This is more of a Python 2 question, but I'm curious about whether there are any differences in Python 3 as well.
I noticed that when I define certain methods on a class (whether it is new-style or not), Python automatically treats instances of that class as instances of certain classes from the collections module. My example below demonstrates this with collections.Callable.
>>> import collections
>>> class A:
...     def __call__(self):
...         print "A"
...
>>> a = A()
>>> isinstance(a, collections.Callable)
True
>>> class A(object):
...     def __call__(self):
...         print "A"
...
>>> a = A()
>>> isinstance(a, collections.Callable)
True
>>> class A(object):
...     pass
...
>>> a = A()
>>> isinstance(a, collections.Callable)
False
>>> class A:
...     pass
...
>>> a = A()
>>> isinstance(a, collections.Callable)
False
You'll notice that I haven't explicitly made any of those classes inherit from collections.Callable, yet they all do so as long as they define a __call__ method. Is this done for a specific purpose, and is it well defined somewhere? Is Python automatically giving classes certain base classes just for defining certain methods, or is something else going on?
You'll get similar results for collections.Iterable and the __iter__ method, and for some other special methods as well.
The ABCMeta class implements hooks to customize instance and subclass checks; both __instancecheck__() and __subclasscheck__() are provided.
These hooks delegate to the ABCMeta.__subclasshook__() method:
__subclasshook__(subclass)
Check whether subclass is considered a subclass of this ABC. This means that you can customize the behavior of issubclass further without the need to call register() on every class you want to consider a subclass of the ABC.
This hook can then check whether a given subclass implements the expected methods; for Callable the implementation simply has to see if there is a __call__ method present:
@classmethod
def __subclasshook__(cls, C):
    if cls is Callable:
        if _hasattr(C, "__call__"):
            return True
    return NotImplemented
This check is restricted to the Callable class itself; subclasses of the ABC have to implement their own __subclasshook__, since they will most likely add required methods of their own.
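To make the mechanism concrete, here is a minimal sketch of a user-defined ABC that performs the same kind of structural check. The SupportsClose and File names are made up for illustration, and the syntax is Python 3:

from abc import ABCMeta

class SupportsClose(metaclass=ABCMeta):
    """Anything defining close() is treated as a virtual subclass."""
    @classmethod
    def __subclasshook__(cls, C):
        if cls is SupportsClose:
            # Walk the MRO so inherited close() methods count too
            if any("close" in B.__dict__ for B in C.__mro__):
                return True
        return NotImplemented

class File:
    def close(self):
        print("closed")

print(issubclass(File, SupportsClose))    # True, without inheritance or register()
print(isinstance(File(), SupportsClose))  # True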
Related
I am building up my understanding of Python, and recently I understood that functions must be objects of some class(?), and that a def func(): just instantiates an object of the function class. Honestly, I was mind-blown when I managed to create an attribute on func.
Looking through dir(func) I noticed that all the special methods such as __eq__ are indeed inherited, and I wanted to play around with them:
def func(n):
    print(n)
    def __eq__(self, func2):
        print('hello')
However, it does not work:
>>> func.__eq__(print)
NotImplemented
What would it be the proper way to overload the equality operator for a function? I don't see how to overload it without having a proper class definition.
Because Python treats all functions as objects, it might be worth thinking of creating a function as creating an instance of the function class (which isn't documented), with its __call__ behaviour being the body written in the def statement. The actual C source of the function type is on GitHub if you want to know the implementation details.
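A small sketch (Python 3, with a made-up greet function) that makes this concrete:

import types

def greet(name):
    return "hello " + name

print(type(greet) is types.FunctionType)  # True: def created an instance of the function type
print(greet.__call__("world"))            # "hello world": calling it goes through __call__
greet.custom_attribute = 42               # function objects happily accept arbitrary attributes
print(greet.custom_attribute)             # 42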
On returning NotImplemented:
In Python, if a class does not override __eq__ or __hash__, its objects get default implementations: __hash__ is derived from id(), and __eq__ behaves like lambda self, other: self is other. When a comparison method returns NotImplemented, the runtime tries the reflected operation on the other operand (for example other.__eq__(self) when evaluating ==); if that also returns NotImplemented, == falls back to the default identity comparison.
>>> def test(a):
... return a
...
>>> def test2(a):
... return a
...
>>> test == test2
False
>>> test.__eq__(test2)
NotImplemented
You can also test this by creating a dummy class that doesn't override __eq__ (like how the function class doesn't):
>>> class testcls:
... pass
...
>>> t1 = testcls()
>>> t2 = testcls()
>>> t1.__eq__(t2)
NotImplemented
>>> t1.__eq__(t1)
True
>>> t1 == t2
False
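To illustrate the reflected-operation fallback: when the left operand's __eq__ returns NotImplemented, Python tries the right operand's __eq__. The Meters and Feet classes below are invented purely for illustration:

class Meters:
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        if isinstance(other, Meters):
            return self.value == other.value
        return NotImplemented          # signal: let the other operand try

class Feet:
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        if isinstance(other, Meters):
            return abs(self.value * 0.3048 - other.value) < 1e-9
        return NotImplemented

m, f = Meters(0.3048), Feet(1)
print(m.__eq__(f))   # NotImplemented when called directly
print(m == f)        # True: the runtime falls back to Feet.__eq__(f, m)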
No, functions don't have to be objects of some class, yet in Python they are. But that is another story.
It seems you misunderstand that __eq__ is a method, i.e. you need a class it belongs to in order to overload it. What your code does is define a custom function inside another function, which is valid Python but has nothing to do with overloading __eq__.
Since you can't subclass the built-in function type, you can't override its __eq__. And even if you could, it wouldn't work, for reasons that are mostly theoretical.
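If you really want equality behaviour attached to something callable, the usual workaround is to wrap the function in a class that defines both __call__ and __eq__. This is only a sketch with invented names, not the only way to do it:

class ComparableFunc:
    def __init__(self, func):
        self.func = func
    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)    # delegate calls to the wrapped function
    def __eq__(self, other):
        print('hello')                       # the behaviour the question asked for
        return NotImplemented                # then fall back to the default comparison

@ComparableFunc
def func(n):
    print(n)

func(3)         # prints 3; still usable like a function
func == print   # prints 'hello'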
From the Python docs:
Like its identity, an object’s type is also unchangeable. [1]
And from the footnote:
It is possible in some cases to change an object’s type, under certain controlled conditions. It generally isn’t a good idea though, since it can lead to some very strange behaviour if it is handled incorrectly.
What are the cases in which we can change an object's type, and how do we change it?
class A:
pass
class B:
pass
a = A()
isinstance(a, A) # True
isinstance(a, B) # False
a.__class__ # __main__.A
# changing the class
a.__class__ = B
isinstance(a, A) # False
isinstance(a, B) # True
a.__class__ # __main__.B
However, I can't recall real-world examples where this is helpful. Usually, class manipulations are done at the moment the class is created (not when an object of the class is created), by decorators or metaclasses. For example, dataclasses.dataclass is a decorator that takes a class and constructs another class based on it (see the source code).
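For a sense of what manipulating a class at creation time looks like, here is a minimal, made-up class decorator in the same spirit as dataclasses.dataclass (the add_repr name is invented for this sketch):

def add_repr(cls):
    # Attach a generated __repr__ to the class at definition time.
    def __repr__(self):
        attrs = ", ".join("%s=%r" % (k, v) for k, v in vars(self).items())
        return "%s(%s)" % (cls.__name__, attrs)
    cls.__repr__ = __repr__
    return cls

@add_repr
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

print(Point(1, 2))   # Point(x=1, y=2)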
What are the advantages of using MethodType from the types module? You can use it to add methods to an object. But we can do that easily without it:
def func():
    print 1

class A:
    pass

obj = A()
obj.func = func
It works even if we delete func in the main scope by running del func.
Why would one want to use MethodType? Is it just a convention or a good programming habit?
In fact the difference between adding methods dynamically at run time and
your example is huge:
in your case, you just attach a function to an object; you can call it of course, but it is unbound and has no relation to the object itself (i.e. you cannot use self inside the function)
when added with MethodType, you create a bound method that behaves like a normal Python method of the object: it takes the object it belongs to as its first argument (conventionally called self) and can access it inside the function body
This example shows the difference:
def func(obj):
    print 'I am called from', obj

class A:
    pass

a = A()
a.func = func
a.func()
This fails with a TypeError: func() takes exactly 1 argument (0 given),
whereas this code works as expected:
import types
a.func = types.MethodType(func, a) # or types.MethodType(func, a, A) for PY2
a.func()
shows I am called from <__main__.A instance at xxx>.
A common use of types.MethodType is checking whether some object is a method. For example:
>>> import types
>>> class A(object):
... def method(self):
... pass
...
>>> isinstance(A().method, types.MethodType)
True
>>> def nonmethod():
... pass
...
>>> isinstance(nonmethod, types.MethodType)
False
Note that in your example, isinstance(obj.func, types.MethodType) returns False. If you had defined a method meth in class A, isinstance(obj.meth, types.MethodType) would return True.
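Putting the two cases side by side (Python 3, names invented for illustration):

import types

class A:
    def meth(self):
        pass

def func():
    pass

obj = A()
obj.func = func                                  # a plain function stored on the instance

print(isinstance(obj.meth, types.MethodType))    # True: accessing meth gives a bound method
print(isinstance(obj.func, types.MethodType))    # False: it is just a function attribute
print(isinstance(obj.func, types.FunctionType))  # True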
I have a subclass and I want it to not include a class attribute that's present on the base class.
I tried this, but it doesn't work:
>>> class A(object):
... x = 5
>>> class B(A):
... del x
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
class B(A):
File "<pyshell#1>", line 2, in B
del x
NameError: name 'x' is not defined
How can I do this?
You can use delattr(class, field_name) to remove it from the class definition.
You don't need to delete it. Just override it.
class B(A):
x = None
or simply don't reference it.
Or consider a different design (instance attribute?).
None of the answers above worked for me.
For example, delattr(SubClass, "attrname") (or its exact equivalent, del SubClass.attrname) won't "hide" a parent method, because that is not how method resolution works. It fails with AttributeError('attrname',) instead, as the subclass itself doesn't have attrname. And, of course, replacing the attribute with None doesn't actually remove it.
Let's consider this base class:
class Spam(object):
    # Also try with `expect = True` and with a `@property` decorator
    def expect(self):
        return "This is pretty much expected"
I know only two ways to subclass it while hiding the expect attribute:
Using a descriptor class that raises AttributeError from __get__. On attribute lookup there will be an exception, generally indistinguishable from a lookup failure.
The simplest way is just declaring a property that raises AttributeError. This is essentially what @JBernardo suggested.
class SpanishInquisition(Spam):
    @property
    def expect(self):
        raise AttributeError("Nobody expects the Spanish Inquisition!")

assert hasattr(Spam, "expect") == True
# assert hasattr(SpanishInquisition, "expect") == False  # Fails!
assert hasattr(SpanishInquisition(), "expect") == False
However, this only works for instances, not for the class itself (the hasattr(SpanishInquisition, "expect") == False assertion would fail, because the property object is still found on the class).
If you want all the assertions above to hold true, use this:
class AttributeHider(object):
    def __get__(self, instance, owner):
        raise AttributeError("This is not the attribute you're looking for")

class SpanishInquisition(Spam):
    expect = AttributeHider()

assert hasattr(Spam, "expect") == True
assert hasattr(SpanishInquisition, "expect") == False  # Works!
assert hasattr(SpanishInquisition(), "expect") == False
I believe this is the most elegant method, as the code is clear, generic and compact. Of course, one should really think twice if removing the attribute is what they really want.
Overriding attribute lookup with the __getattribute__ magic method. You can do this in a subclass (or in a mixin, as in the example below, since I wanted to write it just once), and it hides the attribute on the subclass instances. If you want to hide the attribute from the subclass itself as well, you need to use a metaclass.
class ExpectMethodHider(object):
    def __getattribute__(self, name):
        if name == "expect":
            raise AttributeError("Nobody expects the Spanish Inquisition!")
        return super().__getattribute__(name)

class ExpectMethodHidingMetaclass(ExpectMethodHider, type):
    pass

# I've used Python 3.x here, thus the syntax.
# For Python 2.x use __metaclass__ = ExpectMethodHidingMetaclass
class SpanishInquisition(ExpectMethodHider, Spam,
                         metaclass=ExpectMethodHidingMetaclass):
    pass
assert hasattr(Spam, "expect") == True
assert hasattr(SpanishInquisition, "expect") == False
assert hasattr(SpanishInquisition(), "expect") == False
This looks worse (more verbose and less generic) than the method above, but one may consider this approach as well.
Note, this does not work on special ("magic") methods (e.g. __len__), because those bypass __getattribute__. Check out the Special method lookup section of the Python documentation for more details. If this is what you need to undo, just override the special method and call object's implementation, skipping the parent.
Needless to say, this only applies to "new-style" classes (the ones that inherit from object); old-style classes support neither descriptors nor this special method lookup. Hopefully, those are a thing of the past.
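A short sketch of that special-method caveat, using a made-up class: len() consults the type directly, so __getattribute__ on the instance never sees the lookup.

class Hidden:
    def __getattribute__(self, name):
        raise AttributeError(name)   # hide every attribute accessed through the instance
    def __len__(self):
        return 3

h = Hidden()
print(len(h))      # 3: len() looks up __len__ on type(h), bypassing __getattribute__

try:
    h.__len__()    # explicit attribute access does go through __getattribute__
except AttributeError:
    print("hidden")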
Maybe you could make x a property and raise AttributeError whenever someone tries to access it:
>>> class C:
...     x = 5
...
>>> class D(C):
...     def foo(self):
...         raise AttributeError
...     x = property(foo)
...
>>> d = D()
>>> print(d.x)
Traceback (most recent call last):
  File "<pyshell#17>", line 3, in foo
    raise AttributeError
AttributeError
Think carefully about why you want to do this; you probably don't. Consider not making B inherit from A.
The idea of subclassing is to specialise an object. In particular, children of a class should be valid instances of the parent class:
>>> class foo(dict): pass
...
>>> isinstance(foo(), dict)
True
If you implement this behaviour (e.g. with a property whose getter raises AttributeError), you are breaking the subclassing concept, and this is Bad.
I had the same problem as well, and I thought I had a valid reason to delete the class attribute in the subclass: my superclass (call it A) had a read-only property that provided the value of the attribute, but in my subclass (call it B) the attribute was a read/write instance variable. I found that Python kept calling the property function even though I thought the instance variable should have overridden it. I could have made a separate getter function for accessing the underlying property, but that seemed like an unnecessary and inelegant cluttering of the interface namespace (as if that really matters).
As it turns out, the answer was to create a new abstract superclass (call it S) with the attributes originally common to A, and have both A and B derive from S. Since Python has duck typing, it does not really matter that B does not extend A; I can still use them in the same places, since they implicitly implement the same interface.
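A rough sketch of that restructuring (the names S, A and B follow the description above; everything else is invented): the shared interface lives in S, A keeps its read-only property, and B uses a plain instance attribute without an inherited property getting in the way.

from abc import ABC, abstractmethod

class S(ABC):
    @abstractmethod
    def describe(self):
        ...

class A(S):
    @property
    def value(self):               # read-only, computed on access
        return 42
    def describe(self):
        return "A with value %s" % self.value

class B(S):
    def __init__(self, value):
        self.value = value         # ordinary read/write attribute
    def describe(self):
        return "B with value %s" % self.value

print(A().describe())   # A with value 42
b = B(7)
b.value = 8             # fine: B never inherited A's property
print(b.describe())     # B with value 8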
Trying to do this is probably a bad idea, but...
It doesn't seem to be possible via "proper" inheritance, because of how looking up B.x works by default. When getting B.x, x is first looked up in B, and if it's not found there it is looked up in A; but when setting or deleting B.x, only B is searched. For example:
>>> class A:
...     x = 5
...
>>> class B(A):
...     pass
...
>>> B.x
5
>>> del B.x
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: class B has no attribute 'x'
>>> B.x = 6
>>> B.x
6
>>> del B.x
>>> B.x
5
Here we see that at first we can't delete B.x, since it doesn't exist in B (A.x exists, and that's what gets served when you evaluate B.x). However, by setting B.x to 6, B.x comes to exist in B: it can be retrieved as B.x and deleted with del B.x, after which it ceases to exist again, so A.x is once more served as the response to B.x.
What you could do on the other hand is to use metaclasses to make B.x raise AttributeError:
class NoX(type):
    @property
    def x(self):
        raise AttributeError("We don't like X")

class A(object):
    x = [42]

class B(A, metaclass=NoX):
    pass
print(A.x)
print(B.x)
Now of course purists may yell that this breaks the LSP, but it's not that simple. It all boils down to whether you consider that you've created a subtype by doing this. The issubclass and isinstance checks say yes, but the LSP says no (and many programmers would assume "yes", since you inherit from A).
The LSP means that if B is a subtype of A, then we could use B wherever we could use A. Since we can't do that with this construct, we could conclude that B actually isn't a subtype of A, and therefore the LSP isn't violated.
Is there any way to discover the base class of a class in Python?
Given the following class definitions:
class A:
    def speak(self):
        print "Hi"

class B(A):
    def getName(self):
        return "Bob"
If I receive an instance of an object, I can easily work out that it is a B by doing the following:
instance = B()
print instance.__class__.__name__
which prints the class name 'B' as expected.
Is there any way to discover that the instance also inherits from a base class, in addition to its actual class?
Or is that just not how objects in Python work?
The inspect module is really powerful also:
>>> import inspect
>>> inst = B()
>>> inspect.getmro(inst.__class__)
(<class __main__.B at 0x012B42A0>, <class __main__.A at 0x012B4210>)
b = B()
b.__class__
b.__class__.__base__
b.__class__.__bases__
b.__class__.__base__.__subclasses__()
I strongly recommend checking out IPython and using its tab completion :-)
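For reference, here is roughly what those expressions return for the classes from the question (output shown for Python 3, where all classes are new-style):

class A:
    def speak(self):
        print("Hi")

class B(A):
    def getName(self):
        return "Bob"

b = B()
print(b.__class__)             # <class '__main__.B'>
print(b.__class__.__base__)    # <class '__main__.A'>
print(b.__class__.__bases__)   # (<class '__main__.A'>,)
print(A.__subclasses__())      # [<class '__main__.B'>]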
Another way to get the class hierarchy is to access the __mro__ attribute:
class A(object):
    pass

class B(A):
    pass

instance = B()
print(instance.__class__.__mro__)
# (<class '__main__.B'>, <class '__main__.A'>, <type 'object'>)
Note that in Python 2.x, you must use "new-style" classes to ensure they have the __mro__ attribute. You do this by declaring
class A(object):
instead of
class A():
See http://www.python.org/doc/newstyle/ and http://www.python.org/download/releases/2.3/mro/ for more info on new-style classes and the MRO (method resolution order).
In Python 3.x, all classes are new-style, so they have __mro__ either way and you can simply declare:
class A():
If, instead of discovering the base class via reflection, you know the desired class in advance, you can use the following built-in functions. For example:
# With classes from original Question defined
>>> instance = A()
>>> B_instance = B()
>>> isinstance(instance, A)
True
>>> isinstance(instance, B)
False
>>> isinstance(B_instance, A) # Note it returns true if instance is a subclass
True
>>> isinstance(B_instance, B)
True
>>> issubclass(B, A)
True
isinstance(object, classinfo)
Return true if the object argument is an instance of the classinfo argument, or of a (direct or indirect) subclass thereof. Also return true if classinfo is a type object and object is an object of that type. If object is not a class instance or an object of the given type, the function always returns false. If classinfo is neither a class object nor a type object, it may be a tuple of class or type objects, or may recursively contain other such tuples (other sequence types are not accepted). If classinfo is not a class, type, or tuple of classes, types, and such tuples, a TypeError exception is raised. Changed in version 2.2: Support for a tuple of type information was added.
issubclass(class, classinfo)
Return true if class is a subclass (direct or indirect) of classinfo. A class is considered a subclass of itself. classinfo may be a tuple of class objects, in which case every entry in classinfo will be checked. In any other case, a TypeError exception is raised. Changed in version 2.3: Support for a tuple of type information was added.
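A quick sketch of the tuple form mentioned in both descriptions, using throwaway classes:

class A: pass
class B(A): pass
class C: pass

b = B()
print(isinstance(b, (A, C)))   # True: b matches at least one entry in the tuple
print(issubclass(B, (C, A)))   # True: B is a subclass of A
print(issubclass(B, B))        # True: a class is considered a subclass of itself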