How can I tell if a class has a method `__call__`? - python

A very simple class isn't a callable type:
>>> class myc:
...     pass
...
>>> c = myc()
>>> callable(c)
False
How can I tell if a class has a method __call__? Why do the following two ways give opposite results?
>>> myc.__call__
<method-wrapper '__call__' of type object at 0x1104b18>
>>> __call__ in myc.__dict__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name '__call__' is not defined
Thanks.

myc.__call__ is giving you the __call__ method used to call myc itself, not instances of myc. It's the method invoked when you do
new_myc_instance = myc()
not when you do
new_myc_instance()
__call__ in myc.__dict__ gives you a NameError because you forgot to put quotation marks around __call__, and if you'd remembered to put quotation marks, it would have given you False because myc.__call__ isn't found through myc.__dict__. It's found through type(myc).__dict__ (a.k.a. type.__dict__, because type(myc) is type).
To check if myc has a __call__ implementation for its instances, you could perform a manual MRO search, but collections.abc.Callable already implements that for you:
issubclass(myc, collections.abc.Callable)
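For instance (class names here are illustrative), the result of that check agrees with what callable() reports for instances:

```python
import collections.abc

class NotCallable:
    pass

class IsCallable:
    def __call__(self):
        return 42

# issubclass against collections.abc.Callable performs the MRO-wide
# search for a __call__ definition on the class:
print(issubclass(NotCallable, collections.abc.Callable))  # False
print(issubclass(IsCallable, collections.abc.Callable))   # True

# which matches what callable() says about instances:
print(callable(NotCallable()), callable(IsCallable()))    # False True
```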

The reason why myc.__call__ gives you something back[1] instead of raising an AttributeError is that the metaclass of myc (which is type) has a __call__ method. If an attribute isn't found on the instance (in this case your class), it's looked up on the class (in this case the metaclass).
For example if you had a custom metaclass the __call__ lookup would've returned something different:
class mym(type):
    def __call__(self, *args, **kwargs):
        return super().__call__(*args, **kwargs)

class myc(metaclass=mym):
    pass
>>> myc.__call__
<bound method mym.__call__ of <class '__main__.myc'>>
Regarding `__call__ in myc.__dict__`: as noted in the comments and the other answer, you just forgot the quotation marks:
>>> '__call__' in myc.__dict__
False
However, it could also be that myc subclasses a callable class; in that case the check would also give False:
class mysc:
    def __call__(self):
        return 10

class myc(mysc):
    pass
>>> '__call__' in myc.__dict__
False
So you need something more robust. Like searching all the superclasses:
>>> any('__call__' in cls.__dict__ for cls in myc.__mro__)
True
Or, as pointed out by user2357112, use collections.abc.Callable, which overrides the issubclass check so that it looks for __call__:
>>> from collections.abc import Callable
>>> issubclass(myc, Callable)
True
[1] That's also the reason why you can't just use hasattr(myc, '__call__') to find out if it has a __call__ method itself.
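A small sketch of that pitfall: hasattr finds type.__call__ through the metaclass fallback, so it cannot tell a callable class apart from a plain one, whereas the MRO search can:

```python
class myc:
    pass

# hasattr finds type.__call__ through the metaclass fallback,
# so it wrongly suggests instances are callable:
print(hasattr(myc, '__call__'))  # True
print(callable(myc()))           # False

# restricting the search to the class's own MRO avoids the fallback:
print(any('__call__' in cls.__dict__ for cls in myc.__mro__))  # False
```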

Related

How do I mock methods of a decorated class in python?

Having trouble with applying mocks to a class with a decorator. If I write the class without a decorator, patches are applied as expected. However, once the class is decorated, the same patch fails to apply.
What's going on here, and what's the best way to approach testing classes that may be decorated?
Here's a minimal reproduction.
# module.py
import functools

def decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@decorator  # comment this out and the test passes
class Something:
    def do_external(self):
        raise Exception("should be mocked")

    def run(self):
        self.do_external()

# test_module.py
from unittest import TestCase
from unittest.mock import Mock, patch

from module import Something

class TestModule(TestCase):
    @patch('module.Something.do_external', Mock())
    def test_module(self):
        s = Something()
        s.run()
So, as I stated in the comment, your wrapper function replaces Something in the `module` module's namespace. So, putting your code in module.py on my computer, observe:
>>> import module
>>> type(module.Something)
<class 'function'>
Since you used the functools.wraps decorator, the object being wrapped is added to the wrapper function at .__wrapped__:
>>> module.Something.__wrapped__
<class 'module.Something'>
>>> type(module.Something.__wrapped__)
<class 'type'>
So when you patch module.Something, you are patching the function object, not the class object. But instances of your class reference the class object directly; it doesn't matter what global name refers to it. So, observe some more:
>>> import unittest.mock as mock
>>> with mock.patch('module.Something.do_external', mock.Mock()):
... print(module.Something.do_external)
... print(module.Something.__wrapped__.do_external)
...
<Mock id='140609580169920'>
<function Something.do_external at 0x7fe23822cc10>
This is why we see this particular behavior:
>>> with mock.patch('module.Something.do_external', mock.Mock()):
... module.Something().do_external()
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/Users/jarrivillaga/module.py", line 18, in do_external
raise Exception("should be mocked")
Exception: should be mocked
In this particular case, because the __wrapped__ attribute references the original class, we can patch that:
>>> with mock.patch('module.Something.__wrapped__.do_external', mock.Mock()):
... module.Something().do_external()
...
<Mock name='mock()' id='140608505553680'>
I highly suggest rethinking your decorator design if this is meant for external/public use. Fundamentally, module.Something is not a class, it is a function, so you cannot treat it like a class and expect it to work like one.
Note that the fact you used wraps is what makes the patch work at all, although it just hides the problem: putting those attributes on the wrapper function doesn't really provide anything useful. wraps is mostly meant for wrapping other functions, where creating a new function that looks like the old one makes sense; in the case of a class, you are making a function look like a class, but only superficially. Just remove the @wraps line and observe:
>>> import module
>>> import unittest.mock as mock
>>> with mock.patch('module.Something.do_external', mock.Mock()):
... pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jarrivillaga/miniconda3/lib/python3.9/unittest/mock.py", line 1404, in __enter__
original, local = self.get_original()
File "/Users/jarrivillaga/miniconda3/lib/python3.9/unittest/mock.py", line 1377, in get_original
raise AttributeError(
AttributeError: <function decorator.<locals>.wrapper at 0x7ff820081160> does not have the attribute 'do_external'
So functools.wraps here was just hiding a fundamental error.
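One way out, sketched below under the assumption that the decorator only needs to hook instantiation: have the class decorator mutate and return the class itself rather than replacing it with a wrapper function, so module.Something stays a real class and patching keeps working. The names mirror the question's example; the decorator body is illustrative.

```python
import functools
from unittest import mock

# Sketch: a class decorator that keeps the decorated name bound to the
# actual class. Instead of returning a wrapper *function*, it wraps
# __init__ in place and returns the class object itself.
def decorator(cls):
    original_init = cls.__init__

    @functools.wraps(original_init)
    def __init__(self, *args, **kwargs):
        # hook point: add cross-cutting behaviour here
        original_init(self, *args, **kwargs)

    cls.__init__ = __init__
    return cls  # crucially, still a class

@decorator
class Something:
    def do_external(self):
        raise Exception("should be mocked")

    def run(self):
        self.do_external()

# Something is still a class, so patching its attributes works:
with mock.patch.object(Something, 'do_external', mock.Mock()):
    Something().run()  # mocked, no exception raised
```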

When does copy(foo) call foo.copy?

I'm designing a base class, and I want it to define a base behaviour for copy.copy.
This behaviour consists of printing a warning to the console and then copying the instance as if it had no __copy__ attribute.
When one defines a blank Foo class and copies an instance of it, the copy function returns a new instance of that class, as shown by the following session:
>>> class Foo: pass
...
>>> foo = Foo()
>>> foo2 = copy(foo)
>>> foo is foo2
False
Now if the Foo class defines a __copy__ instance method, the latter will be called when trying to pass an instance to copy:
>>> class Foo:
...     def __copy__(self):
...         print("Copying")
...
>>> foo = Foo()
>>> copy(foo)
Copying
So as I understand it, the flow of execution of the copy function would be:
1. Try to access the object's __copy__ attribute
2. If present, call it
3. If absent, perform a generic copy
But now, I want to capture the copy function's accessing the __copy__ attribute, through defining a __getattr__ method, and then simulate the absence of this attribute, by raising an AttributeError:
>>> class Foo:
...     def __getattr__(self, attr):
...         if attr == '__copy__':
...             print("Accessing '__copy__'")
...         raise AttributeError
...
Then the __copy__ attribute does not seem to be accessed anymore:
>>> foo = Foo()
# Actual behaviour of copy(foo)
>>> copy(foo)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\ProgramData\Miniconda3\lib\copy.py", line 96, in copy
rv = reductor(4)
TypeError: 'NoneType' object is not callable
# Expected behaviour of copy(foo)
>>> foo.__copy__()
Accessing '__copy__'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in __getattr__
AttributeError
What am I missing in the execution flow of the copy function, with regards to the __copy__ attribute?
As far as I understand, given a foo object with no attribute bar, it should behave exactly the same, whether it has a __getattr__ method that fails on bar, or it doesn't define anything.
Is this statement exact?
Short answer seems to be "no".
Within the copy.copy function we find this (summarized):
cls = type(x)
...
copier = getattr(cls, "__copy__", None)
We see that the function uses getattr on the class, not the instance.
And a __getattr__ defined on the class is not consulted when getattr is called on the class itself: it only intercepts lookups on instances (to hook class-level lookups you would need to define it on the metaclass).
Demo (sorry, I would have preferred a working demo):
class Foo:
    def __getattr__(self, attr):
        if attr == '__copy__':
            return "WORKED"

foo = Foo()
print(getattr(foo, "__copy__", None))
print(getattr(Foo, "__copy__", None))
This prints:
WORKED
None
So it's not possible to fool copy.copy into believing there's a __copy__ attribute without actually defining a __copy__ method on the class.
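Given that constraint, the behaviour the question asks for (warn, then copy as if there were no __copy__) can be sketched by defining __copy__ on the base class itself and reproducing the generic shallow copy by hand. Base and Derived are illustrative names, and this assumes instances keep their state in __dict__:

```python
import copy
import warnings

class Base:
    def __copy__(self):
        # since copy.copy looks __copy__ up on the class, define it here
        warnings.warn("copying an instance of %s" % type(self).__name__)
        # reproduce the generic shallow copy: new empty instance,
        # then a shallow copy of the instance dictionary
        cls = type(self)
        new = cls.__new__(cls)
        new.__dict__.update(self.__dict__)
        return new

class Derived(Base):
    def __init__(self, x):
        self.x = x

d = Derived(1)
d2 = copy.copy(d)     # warns, then returns a distinct copy with the same state
print(d2 is d, d2.x)  # False 1
```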

How to make a Python subclass uncallable

How do you "disable" the __call__ method on a subclass so the following would be true:
class Parent(object):
    def __call__(self):
        return

class Child(Parent):
    def __init__(self):
        super(Child, self).__init__()
        object.__setattr__(self, '__call__', None)
>>> c = Child()
>>> callable(c)
False
This and other ways of trying to set __call__ to some non-callable value still result in the child appearing as callable.
You can't. As jonrsharpe points out, there's no way to make Child appear to not have the attribute, and that's what callable(Child()) relies on to produce its answer. Even making it a descriptor that raises AttributeError won't work, per this bug report: https://bugs.python.org/issue23990. A Python 2 example:
>>> class Parent(object):
...     def __call__(self): pass
...
>>> class Child(Parent):
...     __call__ = property()
...
>>> c = Child()
>>> c()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: unreadable attribute
>>> c.__call__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: unreadable attribute
>>> callable(c)
True
This is because callable(...) doesn't exercise the descriptor protocol. Actually calling the object, or accessing a __call__ attribute, retrieves the method through the normal descriptor protocol, even when it's behind a property. But callable(...) doesn't go that far: if it finds anything at all for __call__ it is satisfied, and every subclass of Parent will have something there, either an attribute defined in the subclass or the definition inherited from Parent.
So while you can make actually calling the instance fail with any exception you want, you can't ever make callable(some_instance_of_parent) return False.
It's a bad idea to change the public interface of the class so radically from the parent to the child.
As pointed out elsewhere, you can't uninherit __call__. If you really need to mix callable and non-callable classes, you should use another test (such as an explicit class attribute) or simply make it safe to call the variants with no functionality.
Alternatively, you could override __call__ to raise a custom exception of your own if for some reason you want to mix a non-callable class in with the callable variants:
class NotACallableInstanceException(Exception):
    pass

class Parent(object):
    def __call__(self):
        print "called"

class Child(Parent):
    def __call__(self):
        raise NotACallableInstanceException()

for child_or_parent in list_of_children_and_parents():
    try:
        child_or_parent()
    except NotACallableInstanceException:
        pass
Or, just override __call__ with pass:
class Parent(object):
    def __call__(self):
        print "called"

class Child(Parent):
    def __call__(self):
        pass
Which will still be callable, but will just be a no-op.
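The other option mentioned above, an explicit class attribute as the "is this callable" test, might look like this (names are illustrative). Note that callable() still returns True for Child instances; the flag is purely a convention between the classes and their callers:

```python
class Parent:
    supports_call = True

    def __call__(self):
        return "called"

class Child(Parent):
    # Child instances remain technically callable, but client code
    # checks the explicit flag instead of callable():
    supports_call = False

def invoke_if_supported(obj):
    return obj() if obj.supports_call else None

print(invoke_if_supported(Parent()))  # called
print(invoke_if_supported(Child()))   # None
print(callable(Child()))              # still True
```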

Polymorphism with callables in Python

I have an interface class called iResource, and a number of subclasses, each of which implement the "request" method. The request functions use socket I/O to other machines, so it makes sense to run them asynchronously, so those other machines can work in parallel.
The problem is that when I start a thread with iResource.request and give it a subclass as the first argument, it'll call the superclass method. If I try to start it with "type(a).request" and "a" as the first argument, I get "<type 'instance'>" for the value of type(a). Any ideas what that means and how to get the true type of the method? Can I formally declare an abstract method in Python somehow?
EDIT: Including code.
def getSocialResults(self, query=''):
    # for a in self.types["social"]: print type(a)
    tasks = [type(a).request for a in self.types["social"]]
    argss = [(a, query, 0) for a in self.types["social"]]
    grabbers = executeChainResults(tasks, argss)
    return igrabber.cycleGrabber(grabbers)
"executeChainResults" takes a list "tasks" of callables and a list "argss" of args-tuples, and assumes each returns a list. It then executes each in a separate thread, and concatenates the lists of results. I can post that code if necessary, but I haven't had any problems with it so I'll leave it out for now.
The objects "a" are DEFINITELY not of type iResource, since it has a single constructor that just throws an exception. However, replacing "type(a).request" with "iResource.request" invokes the base class method. Furthermore, calling "self.types["social"][0].request" directly works fine, but the above code gives me: "type object 'instance' has no attribute 'request'".
Uncommenting the commented line prints <type 'instance'> several times.
You can just use the bound method object itself:
tasks = [a.request for a in self.types["social"]]
#        ^^^^^^^^^
grabbers = executeChainResults(tasks, [(query, 0)] * len(tasks))
#                                     ^^^^^^^^^^^^^^^^^^^^^^^^^
If you insist on calling your methods through the base class you could also do it like this:
from abc import ABCMeta
from functools import wraps

def virtualmethod(method):
    method.__isabstractmethod__ = True
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        return getattr(self, method.__name__)(*args, **kwargs)
    return wrapper

class IBase(object):
    __metaclass__ = ABCMeta

    @virtualmethod
    def my_method(self, x, y):
        pass

class AddImpl(IBase):
    def my_method(self, x, y):
        return x + y

class MulImpl(IBase):
    def my_method(self, x, y):
        return x * y

items = [AddImpl(), MulImpl()]
for each in items:
    print IBase.my_method(each, 3, 4)

b = IBase()  # <-- crash
Result:
7
12
Traceback (most recent call last):
File "testvirtual.py", line 30, in <module>
b = IBase()
TypeError: Can't instantiate abstract class IBase with abstract methods my_method
Python doesn't support interfaces the way e.g. Java does, but with the abc module you can ensure that certain methods must be implemented in subclasses. Normally you would do this with the abc.abstractmethod() decorator, but then you still could not call the subclass's method through the base class, as you intend. I had a similar question once, and my idea was the virtualmethod() decorator. It's quite simple: it does essentially the same thing as abc.abstractmethod(), but also redirects the call to the subclass's method. The specifics of the abc module can be found in the docs and in PEP 3119.
BTW: I assume you're using Python >= 2.6.
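For readers on Python 3, the same example translates directly (the metaclass keyword argument replaces __metaclass__, and print becomes a function); the behaviour is unchanged:

```python
from abc import ABCMeta
from functools import wraps

def virtualmethod(method):
    method.__isabstractmethod__ = True
    @wraps(method)  # wraps copies __isabstractmethod__ onto the wrapper too
    def wrapper(self, *args, **kwargs):
        # re-dispatch on the instance, so the subclass override is called
        return getattr(self, method.__name__)(*args, **kwargs)
    return wrapper

class IBase(metaclass=ABCMeta):
    @virtualmethod
    def my_method(self, x, y):
        pass

class AddImpl(IBase):
    def my_method(self, x, y):
        return x + y

class MulImpl(IBase):
    def my_method(self, x, y):
        return x * y

for each in (AddImpl(), MulImpl()):
    print(IBase.my_method(each, 3, 4))  # 7, then 12

# IBase() would still raise TypeError: my_method is registered as abstract.
```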
The "<type 'instance'>" you get means you are using an "old style class" in Python, i.e. a class not derived from the "object" type hierarchy. Old style classes do not work with several of the newer features of the language, including descriptors and others. And, among other things, you can't retrieve an attribute (or method) from the class of an old style class using what you are doing:
>>> class C(object):
...     def c(self): pass
...
>>> c = C()
>>> type(c)
<class '__main__.C'>
>>> type(c).c
<unbound method C.c>
>>> class D:  # not inheriting from object: old style class
...     def d(self): pass
...
>>> d = D()
>>> type(d)
<type 'instance'>
>>> type(d).d
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'instance' has no attribute 'd'
>>>
Therefore, just make your base class inherit from "object" instead of nothing, and check whether you still get the error message when requesting the "request" method from type(a).
As for your other observation:
"The problem is that when I start a thread with iResource.request and give it a subclass as the first argument, it'll call the superclass method."
It seems that the "right" thing for it to do is exactly that:
>>> class A(object):
...     def b(self):
...         print "super"
...
>>> class B(A):
...     def b(self):
...         print "child"
...
>>> b = B()
>>> A.b(b)
super
>>>
Here, I call a method in class "A" giving it a specialized instance of "A": the method invoked is still the one in class "A".

Python: Why can't I use `super` on a class?

Why can't I use super to get a method of a class's superclass?
Example:
Python 3.1.3
>>> class A(object):
...     def my_method(self): pass
...
>>> class B(A):
...     def my_method(self): pass
...
>>> super(B).my_method
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
super(B).my_method
AttributeError: 'super' object has no attribute 'my_method'
(Of course this is a trivial case where I could just do A.my_method, but I needed this for a case of diamond-inheritance.)
According to super's documentation, it seems like what I want should be possible. This is super's documentation: (Emphasis mine)
super() -> same as super(__class__, <first argument>)
super(type) -> unbound super object
super(type, obj) -> bound super object; requires isinstance(obj, type)
super(type, type2) -> bound super object; requires issubclass(type2, type)
[non-relevant examples redacted]
It looks as though you need an instance of B to pass in as the second argument.
http://www.artima.com/weblogs/viewpost.jsp?thread=236275
According to this it seems like I just need to call super(B, B).my_method:
>>> super(B, B).my_method
<function my_method at 0x00D51738>
>>> super(B, B).my_method is A.my_method
True
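Since the question mentions diamond inheritance, here is a sketch of why the two-argument class form matters there: the second argument supplies the MRO to search, starting just after the first argument. Class names are illustrative.

```python
class A:
    def who(self):
        return "A"

class B(A):
    def who(self):
        return "B"

class C(A):
    def who(self):
        return "C"

class D(B, C):  # diamond: D -> B, C -> A
    pass

# super(B, D) searches D's MRO (D, B, C, A) starting after B,
# so it finds C.who rather than A.who:
assert super(B, D).who is C.who

# with B itself as the second argument, the MRO searched is (B, A):
assert super(B, B).who is A.who
```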
