Issue with making object callable in Python

I wrote code like this
>>> class a(object):
        def __init__(self):
            self.__call__ = lambda x: x

>>> b = a()
I expected that instances of class a would be callable, but it turns out they are not:
>>> b()
Traceback (most recent call last):
  File "<pyshell#5>", line 1, in <module>
    b()
TypeError: 'a' object is not callable
>>> callable(b)
False
>>> hasattr(b, '__call__')
True
I can't understand why.
Please help me.

Special methods are looked up on the type (e.g., class) of the object being operated on, not on the specific instance. Think about it: otherwise, if a class defines __call__ for example, when the class is called that __call__ should get called... what a disaster! But fortunately the special method is instead looked up on the class's type, AKA metaclass, and all is well ("legacy classes" had very irregular behavior in this, which is why we're all better off with the new-style ones -- which are the only ones left in Python 3).
So if you need "per-instance overriding" of special methods, you have to ensure the instance has its own unique class. That's very easy:
class a(object):
    def __init__(self):
        self.__class__ = type(self.__class__.__name__, (self.__class__,), {})
        self.__class__.__call__ = lambda x: x
and you're there. Of course that would be silly in this case, as every instance ends up with just the same "so-called per-instance" (!) __call__, but it would be useful if you really needed overriding on a per-individual-instance basis.
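A quick sanity check of the trick, as a sketch (the results follow from the lambda returning its argument and from __call__ now living on the instance's own unique class):

b = a()
print(callable(b))  # True: __call__ is found on b's throwaway subclass
print(b() is b)     # True: lambda x: x returns the instance it was called on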

__call__ needs to be defined on the class, not the instance
class a(object):
    def __init__(self):
        pass
    __call__ = lambda x: x
but most people probably find it more readable to define the method the usual way:
class a(object):
    def __init__(self):
        pass

    def __call__(self):
        return self
If you need different behaviour for each instance, you could do it like this:
class a(object):
    def __init__(self):
        self.myfunc = lambda x: x

    def __call__(self):
        return self.myfunc(self)
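A brief usage sketch of the per-instance variant above (the replacement lambda receives the instance, because __call__ forwards self to myfunc):

b1, b2 = a(), a()
b2.myfunc = lambda obj: 'swapped'  # override behaviour for b2 only
print(b1() is b1)  # True: the default myfunc returns its argument
print(b2())        # swapped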

What about this?
Define a base class AllowDynamicCall:
class AllowDynamicCall(object):
    def __call__(self, *args, **kwargs):
        return self._callfunc(self, *args, **kwargs)
And then subclass AllowDynamicCall:
class Example(AllowDynamicCall):
    def __init__(self):
        self._callfunc = lambda s: s
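For completeness, a small usage sketch of this approach (assuming the two classes above; __call__ always forwards the instance as the first argument):

e = Example()
print(e() is e)                    # True: __call__ forwards to e._callfunc(e)
e._callfunc = lambda s, n: n * 10  # per-instance behaviour can be swapped at runtime
print(e(5))                        # 50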

Related

Using metaclasses in order to define methods, class methods / instance methods

I am trying to understand more deeply how metaclasses work in Python. My problem is the following: I want to use a metaclass to define, for each class, a method that uses a class attribute defined within the metaclass. For instance, this has applications for instance registration.
Here is a working example:
import functools

def dec_register(func):
    @functools.wraps(func)
    def wrapper_register(*args, **kwargs):
        args[0].__class__.list_register_instances.append(args[0])
        return func(*args, **kwargs)
    return wrapper_register
dict_register_classes = {}

class register(type):
    def __new__(meta, name, bases, attrs):
        dict_register_classes[name] = cls = type.__new__(meta, name, bases, attrs)  # assignment from right to left
        cls.list_register_instances = []
        cls.print_register = meta.print_register
        return cls

    def print_register(self):
        for element in self.list_register_instances:
            print(element)

    def print_register_class(cls):
        for element in cls.list_register_instances:
            print(element)
class Foo(metaclass=register):
    @dec_register
    def __init__(self):
        pass

    def print_register(self):
        pass

class Boo(metaclass=register):
    @dec_register
    def __init__(self):
        pass

    def print_register(self):
        pass
f = Foo()
f_ = Foo()
b = Boo()
print(f.list_register_instances)
print(b.list_register_instances)
print(dict_register_classes)
print("1")
f.print_register()
print("2")
Foo.print_register_class()
print("3")
f.print_register_class()
print("4")
Foo.print_register()
The tests I am making at the end do not work as I expected. I apologize in advance if what I am saying does not use the proper terminology; I am trying to be as clear as possible:
I was thinking that the line cls.print_register = meta.print_register defines a method within the class, using the method defined within the metaclass. Thus it is a method that I can use on an object. I can also use it as a class method, since it is defined in the metaclass. However, though the following works:
print("1")
f.print_register()
this does not work correctly:
print("4")
Foo.print_register()
with the error:
Foo.print_register()
TypeError: print_register() missing 1 required positional argument: 'self'
Same for tests 2 and 3: I was expecting that if a method is defined at the class level, it would also be available at the instance level. However, test 3 raises an error.
print("2")
Foo.print_register_class()
print("3")
f.print_register_class()
Hence, can you please explain how my understanding of class methods is wrong? I would like to be able to call the method print_register either on the class or on an instance.
Perhaps it could help to know that in fact I was trying to reproduce the following very simple example :
# example without anything fancy:
class Foo:
    list_register_instances = []

    def __init__(self):
        self.__class__.list_register_instances.append(self)

    @classmethod
    def print_register(cls):
        for element in cls.list_register_instances:
            print(element)
Am I not doing the exact same thing with a metaclass? A classmethod can be used either on a class or on its instances.
Also, if you have any tips about code structure, I would greatly appreciate them; I must be very bad at metaclass syntax.
Fundamentally, because you have shadowed print_register on your instance of the metaclass (your class).
So when you do Foo.print_register, it finds the print_register you defined in
class Foo(metaclass=register):
    ...
    def print_register(self):
        pass
Which of course, is just the plain function print_register, which requires the self argument.
This is (almost) the same thing that would happen with just a regular class and its instances:
class Foo:
    def bar(self):
        print("I am a bar")

foo = Foo()
foo.bar = lambda: print("I've hijacked bar")  # a plain function on the instance: no self is bound
foo.bar()
Note:
In [1]: class Meta(type):
   ...:     def print_register(self):
   ...:         print('hi')
   ...:

In [2]: class Foo(metaclass=Meta):
   ...:     pass
   ...:

In [3]: Foo.print_register()
hi

In [4]: class Foo(metaclass=Meta):
   ...:     def print_register(self):
   ...:         print('hello')
   ...:

In [5]: Foo.print_register()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-5-a42427fde947> in <module>
----> 1 Foo.print_register()

TypeError: print_register() missing 1 required positional argument: 'self'
However, you do this in your metaclass constructor as well!
cls.print_register = meta.print_register
Which is effectively like defining that function in your class definition... not sure why you are doing this though.
You are not doing the exact same thing as using a classmethod. classmethod is a custom descriptor that handles the binding of methods to instances in just the way you need to be able to call it on a class or on an instance. That is not the same as defining a method on the class and on the instance! You could just do this in your metaclass __new__, i.e. cls.print_register = classmethod(meta.print_register), and leave def print_register(self) out of your class definitions:
import functools

def dec_register(func):
    @functools.wraps(func)
    def wrapper_register(*args, **kwargs):
        args[0].__class__.list_register_instances.append(args[0])
        return func(*args, **kwargs)
    return wrapper_register

dict_register_classes = {}

class register(type):
    def __new__(meta, name, bases, attrs):
        dict_register_classes[name] = cls = type.__new__(meta, name, bases, attrs)  # assignment from right to left
        cls.list_register_instances = []
        cls.print_register = classmethod(meta.print_register)  # just create the classmethod manually!
        return cls

    def print_register(self):
        for element in self.list_register_instances:
            print(element)

    def print_register_class(cls):
        for element in cls.list_register_instances:
            print(element)

class Foo(metaclass=register):
    @dec_register
    def __init__(self):
        pass
Note that print_register doesn't have to be defined inside your metaclass; indeed, in this case, I would just define it at the module level:
def print_register(self):
    for element in self.list_register_instances:
        print(element)

...

class register(type):
    def __new__(meta, name, bases, attrs):
        dict_register_classes[name] = cls = type.__new__(meta, name, bases, attrs)
        cls.list_register_instances = []
        cls.print_register = classmethod(print_register)
        return cls

...
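With either classmethod-based version, both of the failing tests from the question now pass. A quick usage sketch, assuming the Foo defined with the fixed metaclass above (and @dec_register on __init__):

f = Foo()
Foo.print_register()  # works: the classmethod descriptor binds Foo as cls
f.print_register()    # also works: called on an instance, it still binds the class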
I think you actually understand metaclasses sufficiently; it is your understanding of classmethod that is incorrect, as far as I can tell. If you want to understand how classmethod works, and indeed how method-instance binding works for regular functions, you need to understand descriptors. Here's an enlightening link. Function objects are descriptors: they bind the instance as the first argument to themselves when called on an instance (rather, they create a method object and return that, but it is basically partial application). classmethod objects are another kind of descriptor, one that binds the class as the first argument to the function it decorates when called on either the class or the instance. The link describes how you could write classmethod in pure Python.
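To make the descriptor mechanics concrete, here is a minimal pure-Python sketch of a classmethod-like descriptor (simplified; the real classmethod is implemented in C, and the names here are illustrative):

class my_classmethod:
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        # bind the class (not the instance) as the first argument
        if objtype is None:
            objtype = type(obj)
        def bound(*args, **kwargs):
            return self.func(objtype, *args, **kwargs)
        return bound

class Spam:
    @my_classmethod
    def which(cls):
        return cls.__name__

print(Spam.which())    # Spam - called on the class
print(Spam().which())  # Spam - called on an instance, still binds the class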

multiple python class inheritance

I am trying to understand Python's class inheritance and I am having some trouble figuring out how to do the following:
How can I inherit a method from a class conditional on the child's input?
I have tried the following code below without much success.
class A(object):
    def __init__(self, path):
        self.path = path

    def something(self):
        print("Function %s" % self.path)

class B(object):
    def __init__(self, path):
        self.path = path
        self.c = 'something'

    def something(self):
        print('%s function with %s' % (self.path, self.c))

class C(A, B):
    def __init__(self, path):
        # super(C, self).__init__(path)
        if path == 'A':
            A.__init__(self, path)
        if path == 'B':
            B.__init__(self, path)
        print('class: %s' % self.path)

if __name__ == '__main__':
    C('A')
    out = C('B')
    out.something()
I get the following output:
class: A
class: B
Function B
While I would like to see:
class: A
class: B
B function with something
I guess the reason why A.something() is used (instead of B.something()) has to do with Python's MRO.
Calling __init__ on either parent class does not change the inheritance structure of your classes, no. You are only changing what initialiser method is run in addition to C.__init__ when an instance is created. C inherits from both A and B, and all methods of B are shadowed by those on A due to the order of inheritance.
If you need to alter class inheritance based on a value in the constructor, create two separate classes, with different structures. Then provide a different callable as the API to create an instance:
class CA(A):
    # just inherit __init__, no need to override
    pass

class CB(B):
    # just inherit __init__, no need to override
    pass

def C(path):
    # create an instance of a class based on the value of path
    class_map = {'A': CA, 'B': CB}
    return class_map[path](path)
The user of your API still has the name C() to call; C('A') produces an instance of a different class from C('B'), but they both implement the same interface, so this doesn't matter to the caller.
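A short usage sketch of that factory function, using the A/B names from the question:

out = C('B')                # really a CB instance
out.something()             # prints: B function with something
print(isinstance(out, CB))  # True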
If you have to have a common 'C' class to use in isinstance() or issubclass() tests, you could mix one in, and use the __new__ method to override what subclass is returned:
class C:
    def __new__(cls, path):
        if cls is not C:
            # for inherited classes, not C itself
            return super().__new__(cls)
        class_map = {'A': CA, 'B': CB}
        cls = class_map[path]
        # this is a subclass of C, so __init__ will be called on it
        return cls.__new__(cls, path)

class CA(C, A):
    # just inherit __init__, no need to override
    pass

class CB(C, B):
    # just inherit __init__, no need to override
    pass
__new__ is called to construct the new instance object; if the __new__ method returns an instance of the class (or a subclass thereof) then __init__ will automatically be called on that new instance object. This is why C.__new__() returns the result of CA.__new__() or CB.__new__(); __init__ is going to be called for you.
Demo of the latter:
>>> C('A').something()
Function A
>>> C('B').something()
B function with something
>>> isinstance(C('A'), C)
True
>>> isinstance(C('B'), C)
True
>>> isinstance(C('A'), A)
True
>>> isinstance(C('A'), B)
False
If neither of these options is workable for your specific use case, you'd have to add more routing in a new something() implementation on C, which then calls either A.something(self) or B.something(self) based on self.path. This becomes cumbersome really quickly when you have to do it for every single method, but a decorator could help there:
from functools import wraps

def pathrouted(f):
    @wraps(f)
    def wrapped(self, *args, **kwargs):
        # call the wrapped version first, ignore return value, in case this
        # sets self.path or has other side effects
        f(self, *args, **kwargs)
        # then pick the class from the MRO as named by path, and call the
        # original version
        cls = next(c for c in type(self).__mro__ if c.__name__ == self.path)
        return getattr(cls, f.__name__)(self, *args, **kwargs)
    return wrapped
then use that on empty methods on your class:
class C(A, B):
    @pathrouted
    def __init__(self, path):
        self.path = path
        # either A.__init__ or B.__init__ will be called next

    @pathrouted
    def something(self):
        pass  # doesn't matter, A.something or B.something is called too
This is, however, becoming very unpythonic and ugly.
While Martijn's answer is (as usual) close to perfect, I'd just like to point out that from a design POV, inheritance is the wrong tool here.
Remember that implementation inheritance is actually a static and somewhat restricted kind of composition/delegation, so as soon as you want something more dynamic, the proper design is to eschew inheritance and go for full composition/delegation, the canonical examples being the State and the Strategy patterns. Applied to your example, this might look something like:
class C(object):
    def __init__(self, strategy):
        self.strategy = strategy

    def something(self):
        return self.strategy.something(self)

class AStrategy(object):
    def something(self, owner):
        print("Function A")

class BStrategy(object):
    def __init__(self):
        self.c = "something"

    def something(self, owner):
        print("B function with %s" % self.c)

if __name__ == '__main__':
    a = C(AStrategy())
    a.something()
    b = C(BStrategy())
    b.something()
Then, if you need to allow the user to specify the strategy by name (as a string), you can add the factory pattern to the solution:
STRATEGIES = {
    "A": AStrategy,
    "B": BStrategy,
}

def cfactory(strategy_name):
    try:
        strategy_class = STRATEGIES[strategy_name]
    except KeyError:
        raise ValueError("'%s' is not a valid strategy" % strategy_name)
    return C(strategy_class())

if __name__ == '__main__':
    a = cfactory("A")
    a.something()
    b = cfactory("B")
    b.something()
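A quick sketch of the factory's error handling (same names as above; the KeyError is translated into a friendlier ValueError):

try:
    cfactory("Z")
except ValueError as exc:
    print(exc)  # 'Z' is not a valid strategy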
Martijn's answer explained how to choose an object inheriting from one of two classes. Python also makes it easy to forward a method to a different class:
>>> class C:
        parents = {'A': A, 'B': B}
        def __init__(self, path):
            self.parent = C.parents[path]
            self.parent.__init__(self, path)  # forward object initialization
        def something(self):
            self.parent.something(self)  # forward something method
>>> ca = C('A')
>>> cb = C('B')
>>> ca.something()
Function A
>>> cb.something()
B function with something
>>> ca.path
'A'
>>> cb.path
'B'
>>> cb.c
'something'
>>> ca.c
Traceback (most recent call last):
  File "<pyshell#46>", line 1, in <module>
    ca.c
AttributeError: 'C' object has no attribute 'c'
But here class C does not inherit from A or B:
>>> C.__mro__
(<class '__main__.C'>, <class 'object'>)
Below is my original solution using monkey patching:
>>> class C:
        parents = {'A': A, 'B': B}
        def __init__(self, path):
            parent = C.parents[path]
            parent.__init__(self, path)  # forward object initialization
            self.something = lambda: parent.something(self)  # "borrow" something method
It avoids the parent attribute in class C, but is less readable...

super() does not work together with type()? super(type, obj): obj must be an instance or subtype of type

In the code, new_type is a class created with members from class X and derived from class A. Any workaround for the TypeError?
class A:
    def __init__(self):
        pass

    def B(self):
        pass

    def C(self):
        pass

class X:
    def __init__(self):
        print(type(self).__bases__)
        super().__init__()

    def B(self):
        self.B()

    def Z(self):
        pass

a = X()
print('ok')

new_type = type("R", (A,), dict(X.__dict__))
some_obj = new_type()
Program output:
(<class 'object'>,)
ok
(<class '__main__.A'>,)
Traceback (most recent call last):
  File "c:\Evobase2005\Main\EvoPro\dc\tests\sandbox.py", line 37, in <module>
    some_obj = new_type()
  File "c:\Evobase2005\Main\EvoPro\dc\tests\sandbox.py", line 27, in __init__
    super().__init__()
TypeError: super(type, obj): obj must be an instance or subtype of type
In the production code, class A does not exist either; it is created dynamically as well, because it uses resources from a C++ library for class construction, hence the twisted code. ;)
EDIT: This fails too.
class X:
    def __init__(self):
        print(type(self).__bases__)
        super().__init__()

    def Z(self):
        pass

new_type = type("R", (object,), dict(X.__dict__))
some_obj = new_type()
super() has two forms, the two-argument form and the zero-argument form; quoting the standard library docs:
The two argument form specifies the arguments exactly and makes the appropriate references. The zero argument form only works inside a class definition, as the compiler fills in the necessary details to correctly retrieve the class being defined, as well as accessing the current instance for ordinary methods.
The zero argument form will not work here because it relies on the __class__ cell the compiler fills in, and that cell refers to the class in whose body the method was textually defined (X here), not the dynamically created R. Since an instance of R is not an instance of X, super() raises the TypeError.
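You can actually inspect the cell the compiler creates: any method that uses zero-argument super() carries a __class__ closure cell, and it keeps pointing at X even after the method is copied into R's namespace. A small sketch:

class X:
    def __init__(self):
        super().__init__()

# the closure cell refers to X; an instance of R is not an instance of X
print(X.__init__.__closure__[0].cell_contents)  # <class '__main__.X'>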
However, when you use the two-argument form of super(), the code works fine:
class A:
    def __init__(self):
        pass

class X:
    def __init__(self):
        print(type(self).__bases__)
        super(self.__class__, self).__init__()

x = X()
R = type("R", (A,), dict(X.__dict__))
obj = R()
Output:
(<class 'object'>,)
(<class '__main__.A'>,)
You cannot use super(self.__class__, self) more than once in the call hierarchy, though, or you will run into infinite recursion; see this SO answer.
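If you control how the classes are generated, one way to sidestep the pitfall entirely is to execute a fresh class statement per dynamic type, so the zero-argument form gets a correct __class__ cell. A sketch under that assumption (make_type is a hypothetical helper, not part of the question's code):

def make_type(name, base):
    class _X(base):
        def __init__(self):
            print(type(self).__bases__)
            super().__init__()  # the __class__ cell refers to _X itself

        def Z(self):
            pass
    _X.__name__ = name
    return _X

R = make_type("R", A)
obj = R()  # prints (<class '__main__.A'>,) with no TypeError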

How to create object of derived class inside base class in Python?

I have a code like this:
class Base:
    def __init__(self):
        pass

    def new_obj(self):
        return Base()  # ← return Derived()

class Derived(Base):
    def __init__(self):
        pass
In the line with the comment I actually want not exactly a Derived object, but an object of whatever class self really is.
Here is a real-life example from Mercurial.
How to do that?
def new_obj(self):
    return self.__class__()
I can't think of a really good reason to do this, but as D.Shawley pointed out:
def new_obj(self):
    return self.__class__()
will do it.
That's because when calling a method on a derived class, if it doesn't exist on that class, it will use the method resolution order to figure out which method to call on its inheritance chain. In this case, you've only got one, so it's going to call Base.new_obj and pass in the instance as the first argument (i.e. self).
All instances have a __class__ attribute, that refers to the class that they are an instance of. So given
class Base:
    def new_obj(self):
        return self.__class__()

class Derived(Base):
    pass

derived = Derived()
The following lines are functionally equivalent:
derived.new_obj()
# or
Base.new_obj(derived)
You may have encountered a relative of this if you've either forgotten to add the self parameter to your function declaration, or not provided enough arguments to a function and seen a stack trace that looks like this:
>>> f.bar()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: bar() takes exactly 2 arguments (1 given)
You can use a classmethod:
class Base:
    def __init__(self):
        pass

    @classmethod
    def new_obj(cls):
        return cls()

class Derived(Base):
    def __init__(self):
        pass
>>> b = Base()
>>> b.new_obj()
<__main__.Base at 0x10fc12208>
>>> d = Derived()
>>> d.new_obj()
<__main__.Derived at 0x10fdfce80>
You can also do this with a class method, which you create with a decorator.
In [1]: class Base:
   ...:     @classmethod
   ...:     def new_obj(cls):
   ...:         return cls()
   ...:

In [2]: class Derived(Base): pass

In [3]: print type(Base.new_obj())
<type 'instance'>

In [4]: print Base.new_obj().__class__
__main__.Base

In [5]: print Derived.new_obj().__class__
__main__.Derived
Incidentally (you may know this), you don't have to create __init__ methods if you don't do anything with them.

python: super()-like proxy object that starts the MRO search at a specified class

According to the docs, super(cls, obj) returns
a proxy object that delegates method calls to a parent or sibling
class of type cls
I understand why super() offers this functionality, but I need something slightly different: I need to create a proxy object that delegates method calls (and attribute lookups) to class cls itself; and, as in super, if cls doesn't implement the method/attribute, my proxy should continue looking in MRO order (of the new, not the original, class). Is there any function I can write that achieves that?
Example:
class X:
    def act(self):
        ...

class Y:
    def act(self):
        ...

class A(X, Y):
    def act(self):
        ...

class B(X, Y):
    def act(self):
        ...

class C(A, B):
    def act(self):
        ...

c = C()
b = some_magic_function(B, c)
# `b` needs to delegate calls to `act` to B, and look up attribute `s` in B
# I will pass `b` somewhere else, and have no control over it
Of course, I could do b = super(A, c), but that relies on knowing the exact class hierarchy and the fact that B follows A in the MRO. It would silently break if either of these two assumptions changes in the future. (Note that super doesn't make any such assumptions!)
If I just needed to call b.act(), I could use B.act(c). But I am passing b to someone else, and have no idea what they'll do with it. I need to make sure it doesn't betray me and start acting like an instance of class C at some point.
A separate question: the documentation for super() (in Python 3.2) only talks about its method delegation, and does not clarify that attribute lookups for the proxy are also performed the same way. Is it an accidental omission?
EDIT
The updated Delegate approach works in the following example as well:
class A:
    def f(self):
        print('A.f')

    def h(self):
        print('A.h')
        self.f()

class B(A):
    def g(self):
        self.f()
        print('B.g')

    def f(self):
        print('B.f')

    def t(self):
        super().h()

a_true = A()
a_true.h()  # an instance of A ends up executing A.f

b = B()
a_proxy = Delegate(A, b)
a_proxy.h()  # *unlike* super(), the updated `Delegate` implementation would call A.f, not B.f
Note that the updated class Delegate is closer to what I want than super() for two reasons:
super() only does its proxying for the first call; subsequent calls happen as normal, since by then the object itself is used, not its proxy.
super() does not allow attribute access.
Thus, my question as asked has a (nearly) perfect answer in Python.
It turns out that, at a higher level, I was trying to do something I shouldn't (see my comments here).
This class should cover the most common cases:
class Delegate:
    def __init__(self, cls, obj):
        self._delegate_cls = cls
        self._delegate_obj = obj

    def __getattr__(self, name):
        x = getattr(self._delegate_cls, name)
        if hasattr(x, "__get__"):
            return x.__get__(self._delegate_obj)
        return x
Use it like this:
b = Delegate(B, c)
(with the names from your example code.)
Restrictions:
You cannot retrieve some special attributes like __class__ etc. from the class you pass in the constructor via this proxy. (This restriction also applies to super.)
This might behave weirdly if the attribute you want to retrieve is some weird kind of descriptor.
Edit: If you want the code in the update to your question to work as desired, you can use the following code:
class Delegate:
    def __init__(self, cls):
        self._delegate_cls = cls

    def __getattr__(self, name):
        x = getattr(self._delegate_cls, name)
        if hasattr(x, "__get__"):
            return x.__get__(self)
        return x
This passes the proxy object as the self parameter to any called method, and it doesn't need the original object at all, hence I deleted it from the constructor.
If you also want instance attributes to be accessible you can use this version:
class Delegate:
    def __init__(self, cls, obj):
        self._delegate_cls = cls
        self._delegate_obj = obj

    def __getattr__(self, name):
        if name in vars(self._delegate_obj):
            return getattr(self._delegate_obj, name)
        x = getattr(self._delegate_cls, name)
        if hasattr(x, "__get__"):
            return x.__get__(self)
        return x
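A usage sketch of this last version against the classes from the EDIT in the question (unlike super(A, b), the proxy keeps resolving self.f within A, because each method receives the proxy itself as self):

b = B()
a_proxy = Delegate(A, b)
a_proxy.h()  # prints A.h then A.f: self.f inside A.h goes back through the proxy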
A separate question, the documentation for super() (in Python 3.2)
only talks about its method delegation, and does not clarify that
attribute lookups for the proxy are also performed the same way. Is it
an accidental omission?
No, this is not accidental. super() does nothing for attribute lookups. The reason is that attributes on an instance are not associated with a particular class, they're just there. Consider the following:
class A:
    def __init__(self):
        self.foo = 'foo set from A'

class B(A):
    def __init__(self):
        super().__init__()
        self.bar = 'bar set from B'

class C(B):
    def method(self):
        self.baz = 'baz set from C'

class D(C):
    def __init__(self):
        super().__init__()
        self.foo = 'foo set from D'
        self.baz = 'baz set from D'

instance = D()
instance.method()
instance.bar = 'not set from a class at all'
Which class "owns" foo, bar, and baz?
If I wanted to view instance as an instance of C, should it have a baz attribute before method is called? How about afterwards?
If I view instance as an instance of A, what value should foo have? Should bar be invisible because it was only added in B, or visible because it was set to a value outside the class?
All of these questions are nonsense in Python. There's no possible way to design a system with the semantics of Python that could give sensible answers to them. __init__ isn't even special in terms of adding attributes to instances of the class; it's just a perfectly ordinary method that happens to be called as part of the instance creation protocol. Any method (or indeed code from another class altogether, or not from any class at all) can create attributes on any instance it has a reference to.
In fact, all of the attributes of instance are stored in the same place:
>>> instance.__dict__
{'baz': 'baz set from C', 'foo': 'foo set from D', 'bar': 'not set from a class at all'}
There's no way to tell which of them were originally set by which class, or were last set by which class, or whatever measure of ownership you want. There's certainly no way to get at "the A.foo being shadowed by D.foo", as you would expect from C++; they're the same attribute, and any writes to it by one class (or from elsewhere) will clobber the value left in it by the other class.
The consequence of this is that super() does not perform attribute lookups the same way it does method lookups; it can't, and neither can any code you write.
In fact, from running some experiments, neither super nor Sven's Delegate actually supports direct attribute retrieval at all!
class A:
    def __init__(self):
        self.spoon = 1
        self.fork = 2

    def foo(self):
        print('A.foo')

class B(A):
    def foo(self):
        print('B.foo')

b = B()
d = Delegate(A, b)
s = super(B, b)
Then both work as expected for methods:
>>> d.foo()
A.foo
>>> s.foo()
A.foo
But:
>>> d.fork
Traceback (most recent call last):
  File "<pyshell#43>", line 1, in <module>
    d.fork
  File "/tmp/foo.py", line 6, in __getattr__
    x = getattr(self._delegate_cls, name)
AttributeError: type object 'A' has no attribute 'fork'
>>> s.spoon
Traceback (most recent call last):
  File "<pyshell#45>", line 1, in <module>
    s.spoon
AttributeError: 'super' object has no attribute 'spoon'
So they both really only work for calling some methods on, not for passing to arbitrary third-party code to pretend to be an instance of the class you want to delegate to.
They don't behave the same way in the presence of multiple inheritance unfortunately. Given:
class Delegate:
    def __init__(self, cls, obj):
        self._delegate_cls = cls
        self._delegate_obj = obj

    def __getattr__(self, name):
        x = getattr(self._delegate_cls, name)
        if hasattr(x, "__get__"):
            return x.__get__(self._delegate_obj)
        return x

class A:
    def foo(self):
        print('A.foo')

class B:
    pass

class C(B, A):
    def foo(self):
        print('C.foo')

c = C()
d = Delegate(B, c)
s = super(C, c)
Then:
>>> d.foo()
Traceback (most recent call last):
  File "<pyshell#50>", line 1, in <module>
    d.foo()
  File "/tmp/foo.py", line 6, in __getattr__
    x = getattr(self._delegate_cls, name)
AttributeError: type object 'B' has no attribute 'foo'
>>> s.foo()
A.foo
Because Delegate ignores the full MRO of whatever class _delegate_obj is an instance of, only using the MRO of _delegate_cls. Whereas super does what you asked in the question, but the behaviour seems quite strange: it's not wrapping an instance of C to pretend it's an instance of B, because direct instances of B don't have foo defined.
Here's my attempt:
class MROSkipper:
    def __init__(self, cls, obj):
        self.__cls = cls
        self.__obj = obj

    def __getattr__(self, name):
        mro = self.__obj.__class__.__mro__
        i = mro.index(self.__cls)
        if i == 0:
            # It's at the front anyway, just behave as getattr
            return getattr(self.__obj, name)
        else:
            # Check __dict__ not getattr, otherwise we'd find methods
            # on classes we're trying to skip
            try:
                return self.__obj.__dict__[name]
            except KeyError:
                return getattr(super(mro[i - 1], self.__obj), name)
I rely on the __mro__ attribute of classes to properly figure out where to start from, then I just use super. You could walk the MRO chain from that point yourself checking class __dict__s for methods instead if the weirdness of going back one step to use super is too much.
I've made no attempt to handle unusual attributes: those implemented with descriptors (including properties), or those magic methods looked up behind the scenes by Python, which often start at the class rather than the instance directly. But this behaves as you asked moderately well (with the caveat expounded on ad nauseam in the first part of my post; looking up attributes this way will not give you any different results than looking them up directly in the instance).
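A usage sketch against the multiple-inheritance example from earlier in this answer (C(B, A), with foo defined on both C and A):

c = C()
b_view = MROSkipper(B, c)
b_view.foo()  # prints A.foo: the search starts at B, which has no foo, so A.foo is found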
