Polymorphism with callables in Python

I have an interface class called iResource, and a number of subclasses, each of which implements the "request" method. The request methods use socket I/O to other machines, so it makes sense to run them asynchronously so that those machines can work in parallel.
The problem is that when I start a thread with iResource.request and give it a subclass as the first argument, it'll call the superclass method. If I try to start it with "type(a).request" and "a" as the first argument, I get "<type 'instance'>" for the value of type(a). Any ideas what that means and how to get the true type of the method? Can I formally declare an abstract method in Python somehow?
EDIT: Including code.
def getSocialResults(self, query=''):
    #for a in self.types["social"]: print type(a)
    tasks = [type(a).request for a in self.types["social"]]
    argss = [(a, query, 0) for a in self.types["social"]]
    grabbers = executeChainResults(tasks, argss)
    return igrabber.cycleGrabber(grabbers)
"executeChainResults" takes a list "tasks" of callables and a list "argss" of args-tuples, and assumes each returns a list. It then executes each in a separate thread, and concatenates the lists of results. I can post that code if necessary, but I haven't had any problems with it so I'll leave it out for now.
The objects "a" are DEFINITELY not of type iResource, since it has a single constructor that just throws an exception. However, replacing "type(a).request" with "iResource.request" invokes the base class method. Furthermore, calling "self.types["social"][0].request" directly works fine, but the above code gives me: "type object 'instance' has no attribute 'request'".
Uncommenting the commented line prints <type 'instance'> several times.

You can just use the bound method object itself:
tasks = [a.request for a in self.types["social"]]
#        ^^^^^^^^^
grabbers = executeChainResults(tasks, [(query, 0)] * len(tasks))
#                                     ^^^^^^^^^^^^^^^^^^^^^^^^^
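A bound method packages the instance together with the correctly dispatched function, so each thread ends up calling the subclass override with no explicit self argument. A quick illustration (Res is a made-up stand-in for one of the resource classes):

# Res is hypothetical; any subclass implementing request() behaves the same.
class Res(object):
    def request(self, query, flag):
        return ["%s handled %r" % (type(self).__name__, query)]

r = Res()
m = r.request          # bound method: r is baked in
print m("python", 0)   # same as r.request("python", 0)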
If you insist on calling your methods through the base class you could also do it like this:
from abc import ABCMeta
from functools import wraps

def virtualmethod(method):
    method.__isabstractmethod__ = True
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        return getattr(self, method.__name__)(*args, **kwargs)
    return wrapper

class IBase(object):
    __metaclass__ = ABCMeta

    @virtualmethod
    def my_method(self, x, y):
        pass

class AddImpl(IBase):
    def my_method(self, x, y):
        return x + y

class MulImpl(IBase):
    def my_method(self, x, y):
        return x * y

items = [AddImpl(), MulImpl()]
for each in items:
    print IBase.my_method(each, 3, 4)

b = IBase()  # <-- crash
Result:
7
12
Traceback (most recent call last):
File "testvirtual.py", line 30, in <module>
b = IBase()
TypeError: Can't instantiate abstract class IBase with abstract methods my_method
Python doesn't support interfaces as e.g. Java does. But with the abc module you can ensure that certain methods must be implemented in subclasses. Normally you would do this with the abc.abstractmethod() decorator, but you still could not call the subclass's method through the base class, as you intend. I had a similar question once and I had the idea of the virtualmethod() decorator. It's quite simple. It essentially does the same thing as abc.abstractmethod(), but also redirects the call to the subclass's method. The specifics of the abc module can be found in the docs and in PEP 3119.
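For comparison, the plain abc.abstractmethod() route might look like this (the IResource/request names are assumed from the question); it enforces the override at instantiation time, but calling IResource.request(a, ...) directly would just run the empty stub:

from abc import ABCMeta, abstractmethod

class IResource(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def request(self, query, flag):
        """Subclasses must implement this."""

# Subclasses that don't override request() can't be instantiated;
# dispatch then happens normally through bound methods: a.request(query, 0)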
BTW: I assume you're using Python >= 2.6.

You get the reference to "<type 'instance'>" when you are using an "old style class" in Python, i.e. a class not derived from the "object" type hierarchy. Old style classes are not supposed to work with several of the newer features of the language, including descriptors and others. And, among other things, you can't retrieve an attribute (or method) from the class of an old style instance the way you are doing:
>>> class C(object):
...     def c(self): pass
...
>>> c = C()
>>> type(c)
<class '__main__.C'>
>>> type(c).c
<unbound method C.c>
>>> class D:  # not inheriting from object: old style class
...     def d(self): pass
...
>>> d = D()
>>> type(d)
<type 'instance'>
>>> type(d).d
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: type object 'instance' has no attribute 'd'
>>>
Therefore, just make your base class inherit from "object" instead of nothing, and check whether you still get the error message when requesting the "request" method from type(a):
As for your other observation:
"The problem is that when I start a thread with iResource.request and give it a subclass as the first argument, it'll call the superclass method."
It seems that the "right" thing for it to do is exactly that:
>>> class A(object):
...     def b(self):
...         print "super"
...
>>> class B(A):
...     def b(self):
...         print "child"
...
>>> b = B()
>>> A.b(b)
super
>>>
Here, I call a method on the class "A", giving it a specialized instance of "A": the method called is still the one defined in class "A".
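To get the specialized behaviour you have to look the method up through the instance (or through its real type), continuing the session above:

>>> b.b()            # bound method: dispatches to the override
child
>>> type(b).b(b)     # explicit, but equivalent
child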

Related

Why is the staticmethod decorator not needed?

I'm trying to use a class decorator to achieve a singleton pattern as below:
python3.6+
def single_class(cls):
    cls._instance = None
    origin_new = cls.__new__
    # @staticmethod
    # why is the staticmethod decorator not needed here?
    def new_(cls, *args, **kwargs):
        if cls._instance:
            return cls._instance
        cls._instance = cv = origin_new(cls)
        return cv
    cls.__new__ = new_
    return cls

@single_class
class A():
    ...

a = A()
b = A()
print(a is b)  # True
The singleton pattern seems to be working well, but I'm wondering why @staticmethod is not needed above the function new_ in my code, since I know that cls.__new__ is a static method:
class object:
    """ The most base type """
    ...
    @staticmethod  # known case of __new__
    def __new__(cls, *more):  # known special case of object.__new__
        """ Create and return a new object. See help(type) for accurate signature. """
        pass
    ...
Update: test with python2.7+
The @staticmethod seems to be needed in py2 and not needed in py3:
def single_class(cls):
    cls._instance = None
    origin_new = cls.__new__
    # @staticmethod
    # without @staticmethod there will be a TypeError,
    # and it works fine with @staticmethod added
    def new_(cls, *args, **kwargs):
        if cls._instance:
            return cls._instance
        cls._instance = cv = origin_new(cls)
        return cv
    cls.__new__ = new_
    return cls

@single_class
class A(object):
    pass

a = A()
b = A()
print(a is b)
# TypeError: unbound method new_() must be called with A instance as the first argument (got type instance instead)
__new__ explicitly takes the class itself, not an instance, as its first argument. As mentioned in other answers, __new__ is a special case, and a possible reason for it to be a static method is to allow creating instances of other classes through it:
super(CurrentClass, cls).__new__(otherCls, *args, **kwargs)
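A common concrete case is subclassing an immutable type, where the target class is passed to the base's __new__ explicitly; a small sketch:

class Point(tuple):
    def __new__(cls, x, y):
        # tuple.__new__ accepts any (sub)class as its explicit first argument
        return tuple.__new__(cls, (x, y))

p = Point(1, 2)
print(p)  # (1, 2)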
The reason your code works without the @staticmethod decorator in Python 3 but not in Python 2 is the difference in how the two versions handle access to a class's methods.
There are no unbound methods in Python 3. When you access a method through the class in Python 3 you get a plain function, whereas in Python 2 you get an unbound method. You can see this if you do:
# Python 2
>>> A.__new__
<unbound method A.new_>
# Python 3
>>> A.__new__
<function __main__.single_class.<locals>.new_(cls, *args, **kwargs)>
In Python 2, instantiating A() amounts to calling A.__new__(A), but since new_ is looked up as an unbound method you can't call it with the class itself. You need an instance of the class, but to get one you need to create your class first (catch-22), and that's why staticmethod is needed. The error message says the same thing: unbound method new_() must be called with A instance as first argument.
Whereas in Python 3 __new__ is treated as a plain function, so you can call it with the class A itself and A.__new__(A) will work.
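The same py2/py3 difference shows up with any plain function assigned to a class after creation; a minimal sketch:

class C(object):
    pass

def g(cls):
    return "called with %r" % (cls,)

C.method = g
print(C.method)     # py3: <function g ...>; py2: <unbound method C.g>
print(C.method(C))  # fine on py3; py2 raises TypeError (unbound method check)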
From the docs:
Called to create a new instance of class cls. __new__() is a static method (special-cased so you need not declare it as such) that takes the class of which an instance was requested as its first argument.
(My emphasis.)
You might also want to have a look at this SO answer.
It's not needed for this special method because that's the official spec (docs.python.org/3/reference/datamodel.html#object.__new__). Quite simply.
EDIT:
The @staticmethod seems to be needed in py2
It's not:
bruno@bruno:~$ python2
Python 2.7.17 (default, Nov 7 2019, 10:07:09)
[GCC 7.4.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class Foo(object):
...     def __new__(cls, *args, **kw):
...         print("hello %s" % cls)
...         return object.__new__(cls, *args, **kw)
...
>>> f = Foo()
hello <class '__main__.Foo'>
but your example is quite a corner case since you're rebinding this method after the class has been created, and then it does indeed stop working in py2:
>>> class Bar(object):
...     pass
...
>>> def new(cls, *args, **kw):
...     print("yadda %s" % cls)
...     return object.__new__(cls, *args, **kw)
...
>>> Bar.__new__ = new
>>> Bar()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unbound method new() must be called with Bar instance as first argument (got type instance instead)
I assume that in py2, __new__ (if present in the class body) is special-cased by the metaclass constructor (type.__new__ / type.__init__), which wraps it in a staticmethod:
>>> Foo.__dict__["__new__"]
<staticmethod object at 0x7fe11e15af50>
>>> Bar.__dict__["__new__"]
<function new at 0x7fe11e12b950>
There have been a couple of changes in the object model between py2 and py3 which probably explain the different behaviour here; one might be able to find the exact info somewhere in the release notes.
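If you need the decorator to behave the same on both versions, wrapping the replacement explicitly should do it; a sketch, not a tested drop-in:

def single_class(cls):
    cls._instance = None
    origin_new = cls.__new__

    def new_(cls, *args, **kwargs):
        if cls._instance:
            return cls._instance
        cls._instance = cv = origin_new(cls)
        return cv

    # Explicit staticmethod: required on py2 when rebinding __new__ after
    # class creation (no special-casing happens then), harmless on py3.
    cls.__new__ = staticmethod(new_)
    return cls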

Mock modules and subclasses (TypeError: Error when calling the metaclass bases)

To compile documentation on readthedocs, the module h5py has to be mocked. I get an error which can be reproduced with this simple code:
from __future__ import print_function
import sys

try:
    from unittest.mock import MagicMock
except ImportError:
    # Python 2
    from mock import Mock as MagicMock

class Mock(MagicMock):
    @classmethod
    def __getattr__(cls, name):
        return Mock()

sys.modules.update({'h5py': Mock()})

import h5py

print(h5py.File, type(h5py.File))

class A(h5py.File):
    pass

print(A, type(A))

class B(A):
    pass
The output of this script is:
<Mock id='140342061004112'> <class 'mock.mock.Mock'>
<Mock spec='str' id='140342061005584'> <class 'mock.mock.Mock'>
Traceback (most recent call last):
  File "problem_mock.py", line 32, in <module>
    class B(A):
TypeError: Error when calling the metaclass bases
    str() takes at most 1 argument (3 given)
What is the correct way to mock h5py and h5py.File?
It seems to me that this issue is quite general for documentation with readthedocs where some modules have to be mocked. It would be useful for the community to have an answer.
You can't really use Mock instances to act as classes; it fails hard on Python 2, and works on Python 3 only by accident (see below).
You'd have to return the Mock class itself instead if you wanted them to work in a class hierarchy:
>>> class A(Mock):  # note, not called!
...     pass
...
>>> class B(A):
...     pass
...
>>> B
<class '__main__.B'>
>>> B()
<B id='4394742480'>
If you can't import h5py at all, that means you'll need to keep a manually updated list of classes where you return the class rather than an instance:
_classnames = {
    'File',
    # ...
}

class Mock(MagicMock):
    @classmethod
    def __getattr__(cls, name):
        return Mock if name in _classnames else Mock()
This is not foolproof; there is no way to detect the parent instance in a classmethod, so h5py.File().File would result in yet another 'class' being returned, even if in the actual implementation that is some other object. You could partially work around that by creating a new descriptor to use instead of the classmethod decorator, one that binds to either the class or to an instance if one is available; that way you would at least have some context, in the form of self._mock_name, on instances of your Mock class.
In Python 3, using MagicMock directly without further customisation works when used as a base class:
>>> from unittest.mock import MagicMock
>>> h5py = MagicMock()
>>> class A(h5py.File): pass
...
>>> class B(A): pass
...
but this is not really intentional and supported behaviour; the classes and subclasses are 'specced' from the classname string:
>>> A
<MagicMock spec='str' id='4353980960'>
>>> B
<MagicMock spec='str' id='4354132344'>
and thus have all sorts of issues down the line, as instantiation doesn't work:
>>> A()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/mjpieters/Development/Library/buildout.python/parts/opt/lib/python3.5/unittest/mock.py", line 917, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File "/Users/mjpieters/Development/Library/buildout.python/parts/opt/lib/python3.5/unittest/mock.py", line 976, in _mock_call
    result = next(effect)
StopIteration

How to make a Python subclass uncallable

How do you "disable" the __call__ method on a subclass so the following would be true:
class Parent(object):
    def __call__(self):
        return

class Child(Parent):
    def __init__(self):
        super(Child, self).__init__()
        object.__setattr__(self, '__call__', None)

>>> c = Child()
>>> callable(c)
False
This and other ways of trying to set __call__ to some non-callable value still result in the child appearing as callable.
You can't. As jonrsharpe points out, there's no way to make Child appear not to have the attribute, and that's what callable(Child()) relies on to produce its answer. Even making it a descriptor that raises AttributeError won't work, per this bug report: https://bugs.python.org/issue23990. A Python 2 example:
>>> class Parent(object):
...     def __call__(self): pass
...
>>> class Child(Parent):
...     __call__ = property()
...
>>> c = Child()
>>> c()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: unreadable attribute
>>> c.__call__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: unreadable attribute
>>> callable(c)
True
This is because callable(...) doesn't exercise the descriptor protocol. Actually calling the object, or accessing a __call__ attribute, involves retrieving the method even if it's behind a property, through the normal descriptor protocol. But callable(...) doesn't bother going that far: if it finds anything at all on the type it is satisfied, and every subclass of Parent will have something for __call__, either an attribute defined in the subclass or the definition inherited from Parent.
So while you can make actually calling the instance fail with any exception you want, you can't ever make callable(some_instance_of_parent) return False.
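The flip side is that only the type matters: a __call__ attribute set on the instance of a new-style class is ignored both by callable() and by the call itself, as a quick sketch shows:

>>> class NoCall(object):
...     pass
...
>>> n = NoCall()
>>> n.__call__ = lambda: None   # instance attribute, not on the type
>>> callable(n)
False
>>> n()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'NoCall' object is not callable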
It's a bad idea to change the public interface of a class so radically from base class to subclass.
As pointed out elsewhere, you can't uninherit __call__. If you really need to mix callable and non-callable classes, you should use another test (adding a class attribute, as sketched at the end of this answer) or simply make it safe to call the variants with no functionality.
To do the latter, you could override __call__ to raise NotImplementedError (or better, a custom exception of your own) if for some reason you wanted to mix a non-callable class in with the callable variants:
class NotACallableInstanceException(Exception):
    # custom exception marking the non-callable variants
    pass

class Parent(object):
    def __call__(self):
        print "called"

class Child(Parent):
    def __call__(self):
        raise NotACallableInstanceException()

for child_or_parent in list_of_children_and_parents():
    try:
        child_or_parent()
    except NotACallableInstanceException:
        pass
Or, just override __call__ with pass:
class Parent(object):
    def __call__(self):
        print "called"

class Child(Parent):
    def __call__(self):
        pass
Which will still be callable, but will just be a no-op.
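For completeness, the class-attribute test mentioned above might look like this (is_callable is a made-up flag name):

class Parent(object):
    is_callable = True
    def __call__(self):
        print "called"

class Child(Parent):
    is_callable = False

for obj in (Parent(), Child()):
    if obj.is_callable:
        obj()   # only Parent instances are actually called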

Higher order classes in Python

Can anyone explain why the following code doesn't work? I'm trying to make a class decorator that provides new __repr__ and __init__ methods, and if I decorate a class with it, only the repr method seems to get defined. I managed to fix the original problem by making the decorator modify the original class destructively instead of creating a new class (i.e. it defines the new methods and then just uses cl.__init__ = __init__ to overwrite them). Now I'm just curious why the subclassing-based attempt didn't work.
def higherorderclass(cl):
    @functools.wraps(cl)
    class wrapped(cl):
        def __init__(self, *args, **kwds):
            print 'in wrapped init'
            super(wrapped, self).__init__(*args, **kwds)
        def __repr__(self):
            return 'in wrapped repr'
    return wrapped
The first problem is that you're using old-style classes. (That is, classes that don't inherit from object, another built-in type, or another new-style class.) Special method lookup works differently in old-style classes. Really, you don't want to learn how it works; just use new-style classes instead.
But then you run into the next problem: functools.wraps doesn't work on classes in the first place. With new-style classes you get an AttributeError; with old-style classes, things just silently fail in various ways. And you can't just use functools.update_wrapper explicitly either. The problem is that you're trying to replace attributes of the class that aren't writable, and there's no (direct) way around that.
If you use new-style classes, and don't try to wrap them, everything works fine.
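A quick sketch of that lookup difference (Python 2):

class Old:              # old-style
    pass

class New(object):      # new-style
    pass

o, n = Old(), New()
o.__repr__ = lambda: 'from the instance'
n.__repr__ = lambda: 'from the instance'
print repr(o)   # 'from the instance': old-style consults the instance
print repr(n)   # <__main__.New object at 0x...>: new-style only checks the type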
Remove the @functools.wraps() decorator; it only applies to function decorators. With a new-style class your decorator fails with:
>>> @higherorderclass
... class Foo(object):
...     def __init__(self):
...         print 'in foo init'
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "<stdin>", line 3, in higherorderclass
  File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/functools.py", line 33, in update_wrapper
    setattr(wrapper, attr, getattr(wrapped, attr))
AttributeError: attribute '__doc__' of 'type' objects is not writable
Without the @functools.wraps() line your decorator works just fine:
>>> def higherorderclass(cl):
...     class wrapped(cl):
...         def __init__(self, *args, **kwds):
...             print 'in wrapped init'
...             super(wrapped, self).__init__(*args, **kwds)
...         def __repr__(self):
...             return 'in wrapped repr'
...     return wrapped
...
>>> @higherorderclass
... class Foo(object):
...     def __init__(self):
...         print 'in foo init'
...
>>> Foo()
in wrapped init
in foo init
in wrapped repr

Use `object` to instantiate custom class

Here is my haha class:
class haha(object):
    def theprint(self):
        print "i am here"

>>> haha().theprint()
i am here
>>> haha(object).theprint()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object.__new__() takes no parameters
Why does haha(object).theprint() give the wrong output?
class haha(object): means that haha inherits from object. Inheriting from object basically means that it's a new-style class.
Calling haha() creates a new instance of haha and thus calls the initializer, which would be a method named __init__. However, you do not have one, so the default constructor is used, which does not accept any parameters.
This example, a slight change to your haha, may help you understand what is happening. I've implemented __init__ so you can see when it is called.
>>> class haha(object):
...     def __init__(self, arg=None):
...         print '__init__ called on a new haha with argument %r' % (arg,)
...     def theprint(self):
...         print "i am here"
...
>>> haha().theprint()
__init__ called on a new haha with argument None
i am here
>>> haha(object).theprint()
__init__ called on a new haha with argument <type 'object'>
i am here
As you can see, haha(object) ends up passing object as a parameter to __init__. Since you hadn't implemented __init__, you were getting an error because the default __init__ does not accept parameters. As you can see, it doesn't make much sense to do that.
You're confusing inheritance with passing arguments when instantiating a class.
In this case, for your class declaration, you should do:
class haha(object):
    def theprint(self):
        print "i am here"

>>> haha().theprint()
i am here
Because haha(object) means that haha inherits from object. In Python 3 there is no need to write this, because all classes inherit from object by default; in Python 2, though, leaving it out gives you an old-style class, so the explicit (object) is still worth keeping.
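In Python 2 the difference is visible, and it is exactly the <type 'instance'> issue from the first question above:

class OldStyle:             # py2: old-style unless you inherit from object
    pass

class NewStyle(object):
    pass

print type(OldStyle())      # <type 'instance'>
print type(NewStyle())      # <class '__main__.NewStyle'>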
If you have an __init__ method which receives parameters, you need to pass those arguments when instantiating, for example:
class haha():
    def __init__(self, name):
        self.name = name
    def theprint(self):
        print 'hi %s i am here' % self.name

>>> haha('iferminm').theprint()
hi iferminm i am here
