I have a subclass and I want it to not include a class attribute that's present on the base class.
I tried this, but it doesn't work:
>>> class A(object):
...     x = 5
>>> class B(A):
...     del x
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    class B(A):
  File "<pyshell#1>", line 2, in B
    del x
NameError: name 'x' is not defined
How can I do this?
You can use delattr(SomeClass, "field_name") to remove it from the class definition.
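For instance, a minimal sketch; note that this removes the attribute from the class it is actually defined on, which matters for the inherited case discussed in the other answers:
import builtins  # not required; shown only to emphasise delattr is a builtin

class A(object):
    x = 5

delattr(A, "x")           # equivalent to: del A.x
print(hasattr(A, "x"))    # False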
You don't need to delete it. Just override it.
class B(A):
    x = None
or simply don't reference it.
Or consider a different design (instance attribute?).
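A rough sketch of that instance-attribute idea (just an illustration of the suggestion, assuming the value can be set per instance):
class A(object):
    def __init__(self):
        self.x = 5        # instance attribute rather than a class attribute

class B(A):
    def __init__(self):
        pass              # deliberately does not set x

print(hasattr(A(), "x"))  # True
print(hasattr(B(), "x"))  # False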
None of the answers had worked for me.
For example, delattr(SubClass, "attrname") (or its exact equivalent, del SubClass.attrname) won't "hide" a parent method, because that is not how method resolution works. It fails with AttributeError('attrname',) instead, since the subclass itself doesn't have attrname. And, of course, replacing the attribute with None doesn't actually remove it.
Let's consider this base class:
class Spam(object):
    # Also try with `expect = True` and with a `@property` decorator
    def expect(self):
        return "This is pretty much expected"
I know of only two ways to subclass it while hiding the expect attribute:
Using a descriptor class that raises AttributeError from __get__. On attribute lookup, there will be an exception, generally indistinguishable from a lookup failure.
The simplest way is just declaring a property that raises AttributeError. This is essentially what @JBernardo suggested.
class SpanishInquisition(Spam):
    @property
    def expect(self):
        raise AttributeError("Nobody expects the Spanish Inquisition!")
assert hasattr(Spam, "expect") == True
# assert hasattr(SpanishInquisition, "expect") == False # Fails!
assert hasattr(SpanishInquisition(), "expect") == False
However, this only works for instances, not for classes (the hasattr(SpanishInquisition, "expect") == False assertion would be broken, since the class itself still appears to have the attribute).
If you want all the assertions above to hold true, use this:
class AttributeHider(object):
    def __get__(self, instance, owner):
        raise AttributeError("This is not the attribute you're looking for")

class SpanishInquisition(Spam):
    expect = AttributeHider()
assert hasattr(Spam, "expect") == True
assert hasattr(SpanishInquisition, "expect") == False # Works!
assert hasattr(SpanishInquisition(), "expect") == False
I believe this is the most elegant method, as the code is clear, generic and compact. Of course, one should think twice about whether removing the attribute is really what they want.
Overriding attribute lookup with the __getattribute__ magic method. You can do this either in the subclass or in a mixin (as in the example below, since I wanted to write it just once), and that hides the attribute on the subclass's instances. If you want to hide the method from the subclass itself as well, you need to use metaclasses.
class ExpectMethodHider(object):
    def __getattribute__(self, name):
        if name == "expect":
            raise AttributeError("Nobody expects the Spanish Inquisition!")
        return super().__getattribute__(name)

class ExpectMethodHidingMetaclass(ExpectMethodHider, type):
    pass

# I've used Python 3.x here, thus the syntax.
# For Python 2.x use __metaclass__ = ExpectMethodHidingMetaclass
class SpanishInquisition(ExpectMethodHider, Spam,
                         metaclass=ExpectMethodHidingMetaclass):
    pass
assert hasattr(Spam, "expect") == True
assert hasattr(SpanishInquisition, "expect") == False
assert hasattr(SpanishInquisition(), "expect") == False
This looks worse (more verbose and less generic) than the method above, but one may consider this approach as well.
Note, this does not work on special ("magic") methods (e.g. __len__), because those bypass __getattribute__. Check out the Special Method Lookup section of the Python documentation for more details. If this is what you need to work around, just override the magic method and call object's implementation, skipping the parent.
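For example, a sketch of that workaround using __str__, chosen because object provides a default implementation to fall back on:
class Spam(object):
    def __str__(self):
        return "spam spam spam"

class SpanishInquisition(Spam):
    def __str__(self):
        # Skip Spam's __str__ and fall back to object's default implementation.
        return object.__str__(self)

print(Spam())                 # spam spam spam
print(SpanishInquisition())   # <__main__.SpanishInquisition object at 0x...>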
Needless to say, this only applies to "new-style classes" (the ones that inherit from object), since old-style classes don't support the descriptor protocol and the machinery described above. Hopefully, those are a thing of the past.
Maybe you could make x a property and raise AttributeError whenever someone tries to access it.
>>> class C:
...     x = 5
>>> class D(C):
...     def foo(self):
...         raise AttributeError
...     x = property(foo)
>>> d = D()
>>> print(d.x)
File "<pyshell#17>", line 3, in foo
raise AttributeError
AttributeError
Think carefully about why you want to do this; you probably don't. Consider not making B inherit from A.
The idea of subclassing is to specialise an object. In particular, children of a class should be valid instances of the parent class:
>>> class foo(dict): pass
>>> isinstance(foo(), dict)
True
If you implement this behaviour (e.g. with a property whose getter raises AttributeError), you are breaking the subclassing concept, and this is Bad.
I had the same problem as well, and I thought I had a valid reason to delete the class attribute in the subclass: my superclass (call it A) had a read-only property that provided the value of the attribute, but in my subclass (call it B), the attribute was a read/write instance variable. I found that Python was calling the property function even though I thought the instance variable should have been overriding it. I could have made a separate getter function to be used to access the underlying property, but that seemed like an unnecessary and inelegant cluttering of the interface namespace (as if that really matters).
As it turns out, the answer was to create a new abstract superclass (call it S) with the original common attributes of A, and have A and B derive from S. Since Python has duck typing, it does not really matter that B does not extend A, I can still use them in the same places, since they implicitly implement the same interface.
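A rough sketch of that refactoring; the class and attribute names here are only illustrative:
class S(object):
    """Common base class: declares the shared interface, says nothing about storage."""

class A(S):
    @property
    def value(self):
        return 42        # read-only, computed property

class B(S):
    def __init__(self):
        self.value = 0   # plain read/write instance attribute

b = B()
b.value = 10             # works: no inherited property gets in the way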
Trying to do this is probably a bad idea, but...
This doesn't seem to be doable via "proper" inheritance, because of how looking up B.x works by default. When getting B.x, the name x is first looked up in B, and if it's not found there it's searched for in A; on the other hand, when setting or deleting B.x, only B is searched. For example:
>>> class A:
...     x = 5
>>> class B(A):
...     pass
>>> B.x
5
>>> del B.x
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: class B has no attribute 'x'
>>> B.x = 6
>>> B.x
6
>>> del B.x
>>> B.x
5
Here we see that at first we can't delete B.x, since it doesn't exist (A.x exists and is what gets served when you evaluate B.x). However, by setting B.x to 6, B.x comes into existence: it can be retrieved with B.x and deleted with del B.x, after which it ceases to exist, and A.x is once again served in response to B.x.
What you could do on the other hand is to use metaclasses to make B.x raise AttributeError:
class NoX(type):
    @property
    def x(self):
        raise AttributeError("We don't like X")

class A(object):
    x = [42]

class B(A, metaclass=NoX):
    pass

print(A.x)
print(B.x)
Now of course purists may yell that this breaks the LSP, but it's not that simple. It all boils down to whether you consider that you've created a subtype by doing this. The issubclass and isinstance checks say yes, but the LSP says no (and many programmers would assume "yes", since you inherit from A).
The LSP says that if B is a subtype of A, then we should be able to use a B wherever we could use an A; since we can't with this construct, we could conclude that B actually isn't a subtype of A, and therefore the LSP isn't violated.
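To make the "issubclass says yes, attribute access on the class says no" point concrete, a quick check continuing the NoX/A/B snippet above:
print(issubclass(B, A))       # True
print(isinstance(B(), A))     # True
try:
    B.x
except AttributeError as exc:
    print(exc)                # We don't like X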
Related: I found a good description of Python properties in this link:
How does the @property decorator work in Python?
The example below shows how it works, but I've found an exception for the built-in class attribute __name__, and it breaks a reload function of mine, which now raises an error. The linked answer explains that
@property
def foo(self): return self._foo
really means the same thing as
def foo(self): return self._foo
foo = property(foo)
Here is my example:
class A(object):
    @property
    def __name__(self):
        return 'dd'

a = A()
print(a.__name__)
dd
This works; however, the following does not:
class B(object):
    pass

def test(self):
    return 'test'

B.t = property(test)
print(B.t)
B.__name__ = property(test)
<property object at 0x7f71dc5e1180>
Traceback (most recent call last):
File "<string>", line 23, in <module>
TypeError: can only assign string to B.__name__, not 'property'
Does anyone know what is different about the builtin __name__ attribute? It works if I use the normal property decorator, but not with the second way. I have a requirement to reload functions when code changes, and this error blocks the reload procedure. Can anyone help? Thanks.
The short answer is: __name__ is deep magic in CPython.
So, first, let's get the technicalities out of the way. To quote what you said
@property
def foo(self): return self._foo
really means the same thing as
def foo(self): return self._foo
foo = property(foo)
This is correct. But it can be a bit misleading. You have this A class
class A(object):
    @property
    def __name__(self):
        return 'dd'
And you claim that it's equivalent to this B class
class B(object):
    pass

def test(self):
    return 'test'

B.__name__ = property(test)
which is not correct. It's actually equivalent to this
def test(self):
    return 'test'

class B(object):
    __name__ = property(test)
which works and does what you expect it to. And you're also correct that, for most names in Python, your B and my B would be the same. What difference does it make whether I'm assigning to a name inside the class or immediately after its declaration? Replace __name__ with ravioli in the above snippets and either will work. So what makes __name__ special?
That's where the magic comes in. When you define a name inside the class, you're working directly on the class' internal dictionary, so
class A:
    foo = 1
    def bar(self):
        return 1
This defines two things on the class A. One happens to be a number and the other happens to be a function (which will likely be called as a bound method). Now we can access these.
A.foo # Returns 1, simple access
A.bar # Returns the function object bar
A().foo # Returns 1
A().bar # Returns a bound method object
When we look up the names directly on A, we simply access the slots like we would on any object. However, when we look them up on A() (an instance of A), a multi-step process happens
1. Look up the name on the instance's __dict__ directly.
2. If that failed, then look up the name on the class' __dict__.
3. If we found it on the class, see if there's a __get__ on the result and call it.
That third step is what allows bound method objects to work, and it's also the mechanism underlying the property decorators in Python.
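To see step 3 in action with the bar function from the A class above, a quick sketch:
a = A()
func = A.__dict__['bar']      # step 2: the plain function sitting in the class __dict__
bound = func.__get__(a, A)    # step 3: its __get__ produces a bound method
print(bound())                # 1
print(bound == a.bar)         # True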
Let's go through this whole process with a property called ravioli. No magic here.
class A(object):
    @property
    def ravioli(self):
        return 'dd'
When we do A().ravioli, first we see if there's a ravioli on the instance we just made. There isn't, so we check the class' __dict__, and indeed we find a property object at that position. That property object has a __get__, so we call it, and it returns 'dd', so indeed we get the string 'dd'.
>>> A().ravioli
'dd'
Now I would expect that, if I do A.ravioli, we will simply get the property object. Since we're not calling it on an instance, we don't call __get__.
>>> A.ravioli
<property object at 0x7f5bd3690770>
And indeed, we get the property object, as expected.
Now let's do the exact same thing but replace ravioli with __name__.
class A(object):
    @property
    def __name__(self):
        return 'dd'
Great! Now let's make an instance.
>>> A().__name__
'dd'
Sensible, we looked up __name__ on A's __dict__ and found a property, so we called its __get__. Nothing weird.
Now
>>> A.__name__
'A'
Um... what? If we had just found the property on A's __dict__, then we should see that property here, right?
Well, no, not always. See, in the abstract, foo.bar normally looks in foo.__dict__ for a field called bar. But it doesn't do that if the type of foo defines a __getattribute__. If it defines that, then that method is always called instead.
Now, the type of A is type, the type of all Python types. Read that sentence a few times and make sure it makes sense. And if we do a bit of spelunking into the CPython source code, we see that type actually defines __getattribute__ and __setattr__ for the following names:
__name__
__qualname__
__bases__
__module__
__abstractmethods__
__dict__
__doc__
__text_signature__
__annotations__
That explains how __name__ can serve double duty as a property on the class instances and also as an accessible field on the same class. It also explains why you get that highly specialized error message when reassigning to B.__name__: the line
B.__name__ = property(test)
is actually equivalent to
type.__setattr__(B, '__name__', property(test))
which is calling our special-case checker in CPython.
For any other type in Python, in particular for user-defined types, we could get around this with object.__setattr__. Unfortunately,
>>> object.__setattr__(B, '__name__', property(test))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't apply this __setattr__ to type object
There's a really specific check to make sure we don't do exactly this, and the comment reads
/* Reject calls that jump over intermediate C-level overrides. */
We also can't use metaclasses to override __setattr__ and __getattribute__, because the instance lookup procedure specifically doesn't call those (in the above examples, __getattribute__ was called in every case except the one we care about for property purposes). I even tried subclassing str to trick __setattr__ into accepting our made-up value
class NameProperty(str):
    def __new__(cls, value, **kwargs):
        return str.__new__(cls, value)

    def __init__(self, value, method):
        self.method = method

    def __get__(self, instance, owner):
        return self.method(instance)

B.__name__ = NameProperty(B.__name__, method=test)
This actually passes the __setattr__ check, but it doesn't assign to B.__dict__ (since the __setattr__ still assigns to the actual CPython-level name, not to B.__dict__['__name__']), so the property lookup doesn't work.
So... that's how I reached my conclusion of: __name__ is deep magic in CPython. All of the usual Python metaprogramming techniques have failed, and all of the methods getting called are written deep down in C. My advice to you is: Stop using __name__ for things it's not intended for, or be prepared to write some C code and hack on CPython directly.
I am trying to understand the usefulness of the method register of abc.ABCMeta.
To my understanding, after reading https://docs.python.org/3/library/abc.html:
the following code:
class MyFoo(Complex): ...
MyFoo.register(Real)
will create the class MyFoo which implements the Complex abstract class.
After registering the MyFoo with Real, the isinstance and issubclass will return as if MyFoo was derived from Real.
What I don't understand is why Real doesn't get added to the MRO.
I am asking since the following code will not behave as I would have expected:
def trunc(inst):
    if isinstance(inst, Real):
        return inst.__trunc__()  # this should generate error since Complex doesn't have the __trunc__ attr
    else:
        return NotImplemented

trunc(MyFoo())
Shouldn't I have as a given that when isinstance returns true, then the underlying object should have all the characteristics of the class that it's checked against?
Note, if it's not obvious, I'm quite new to the language, so please bear with me.
Using isinstance to examine properties of an object
The idea of Abstract Base Classes (or ABC) is to provide a way to override behaviour of isinstance for types which one cannot control.
For example, you may want to use isinstance to check whether an object supports addition (i.e. it is allowed to do a + b for a and b which satisfy the check).
For that, you could implement a base class and some subclasses:
class TypeWithAddition:
    def __add__(self, other):
        raise NotImplementedError("Override __add__ in the subclass")

class MyClass(TypeWithAddition):
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        return MyClass(self.value + other.value)
Then, you could check isinstance before attempting to sum any two objects.
For example:
a = MyClass(7)
b = MyClass(8)

if isinstance(a, TypeWithAddition) and isinstance(b, TypeWithAddition):
    c = a + b
    print(c.value)
else:
    print('Cannot calculate a+b')
Using ABC to extend isinstance for existing classes
The problem with this approach is that it does not work for types which already exist and support addition, but are not inherited from your base class:
>>> isinstance(7, TypeWithAddition)
False
This is where ABC kicks in. One can inherit the base ("interface") class from ABC and then register any existing class with it.
import abc

class TypeWithAddition(abc.ABC):
    def __add__(self, other):
        raise NotImplementedError("Override __add__ in the subclass")

TypeWithAddition.register(int)
And now it looks like int is inherited from TypeWithAddition:
>>> isinstance(7, TypeWithAddition)
True
But of course, int does not really inherit from TypeWithAddition! And there is really no check that it supports everything that you would expect from TypeWithAddition. It is your (programmer's) job to make sure it does before writing TypeWithAddition.register(int).
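This also answers the question about the MRO: register only teaches the isinstance/issubclass machinery about the relationship; it does not touch the registered class or its MRO. Continuing the snippet above:
print(issubclass(int, TypeWithAddition))   # True
print(isinstance(7, TypeWithAddition))     # True
print(int.__mro__)                         # (<class 'int'>, <class 'object'>) -- no TypeWithAddition here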
An error:
You can easily do this, and it won't work well, of course:
class Foo:
    pass

TypeWithAddition.register(Foo)
It will now seem that Foo supports addition, but it does not:
>>> Foo() + Foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'Foo' and 'Foo'
It does not work because it was wrong to register Foo as a TypeWithAddition in the first place.
I hope this makes things at least a bit clearer.
I know that it is not pythonic to use getters and setters in python. Rather property decorators should be used. But I am wondering about the following scenario -
I have a class initialized with a few instance attributes. Then later on I need to add other instance attributes to the class. If I don't use setters, then I have to write object.attribute = value everywhere outside the class. The class will not have the self.attribute code. Won't this become a problem when I need to track the attributes of the class (because they are strewn all over the code outside the class)?
In general, you shouldn't even use properties. Simple attributes work just fine in the vast majority of cases:
class X:
    pass
>>> x = X()
>>> x.a
Traceback (most recent call last):
# ... etc
AttributeError: 'X' object has no attribute 'a'
>>> x.a = 'foo'
>>> x.a
'foo'
A property should only be used if you need to do some work when accessing an attribute:
import random

class X:
    @property
    def a(self):
        return random.random()
>>> x = X()
>>> x.a
0.8467160913203089
If you also need to be able to assign to a property, defining a setter is straightforward:
class X:
    @property
    def a(self):
        # do something clever
        return self._a

    @a.setter
    def a(self, value):
        # do something even cleverer
        self._a = value
>>> x = X()
>>> x.a
Traceback (most recent call last):
# ... etc
AttributeError: 'X' object has no attribute '_a'
>>> x.a = 'foo'
>>> x.a
'foo'
Notice that in each case, the way that client code accesses the attribute or property is exactly the same. There's no need to "future-proof" your class against the possibility that at some point you might want to do something more complex than simple attribute access, so no reason to write properties, getters or setters unless you actually need them right now.
For more on the differences between idiomatic Python and some other languages when it comes to properties, getters and setters, see:
Why don't you want getters and setters?
Python is not Java (especially the "Getters and setters are evil" section)
Class definition:
class A(object):
    def foo(self):
        print "A"

class B(object):
    def foo(self):
        print "B"

class C(A, B):
    def foo(self):
        print "C"
Output:
>>> super(C)
<super: <class 'C'>, NULL>
>>> super(C).foo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'super' object has no attribute 'foo'
What is the use of super(type) if we can't access attributes of a class?
super(type) is an "unbound" super object. The docs on super discuss that, but don't really elaborate what an "unbound" super object is or does. It is simply a fact of the language that you cannot use them in the manner you are attempting to use them.
This is perhaps what you want:
>>> super(C, C).foo is B.foo
True
That said, what good is an unbound super object? I had to look this up, myself, and found a decent answer here. Note however, that the article's conclusion is that unbound super is a language wart, has no practical use, and should be removed from the language (and having read the article, I agree). The article's explanation on unbound super starts with:
Unbound super objects must be turned into bound objects in order to make them to dispatch properly. That can be done via the descriptor protocol. For instance, I can convert super(C1) in a super object bound to c1 in this way:
>>> c1 = C1()
>>> boundsuper = super(C1).__get__(c1, C1) # this is the same as super(C1, c1)
So, that doesn't seem useful, but the article goes on:
Having established that the unbound syntax does not return unbound methods one might ask what its purpose is. The answer is that super(C) is intended to be used as an attribute in other classes. Then the descriptor magic will automatically convert the unbound syntax in the bound syntax. For instance:
>>> class B(object):
...     a = 1
>>> class C(B):
...     pass
>>> class D(C):
...     sup = super(C)
>>> d = D()
>>> d.sup.a
1
This works since d.sup.a calls super(C).__get__(d,D).a which is turned into super(C, d).a and retrieves B.a.
There is a single use case for the single argument syntax of super that I am aware of, but I think it gives more troubles than advantages. The use case is the implementation of autosuper made by Guido on his essay about new-style classes.
The idea there is to use the unbound super objects as private attributes. For instance, in our example, we could define the private attribute __sup in the class C as the unbound super object super(C):
>>> C._C__sup = super(C)
But do note that the article continues to describe the problems with this (it doesn't quite work correctly, I think mostly due to the fact that the MRO is dependent on the class of the instance you are dealing with, and thus given an instance, some class X's superclass may be different depending on the instance of X we are given).
To accomplish what you want, you need to call it like this:
>>> myC = C()
>>> super(C,myC).foo()
A
Note that there is a NULL reference in place of the bound object. super basically needs a class and an instance of a related object in order to function.
>>> super(C, C()).foo
<bound method C.foo of <__main__.C object at 0x225dc50>>
See: Understanding Python super() with __init__() methods for more details.
class a_class:
    def __getattr__(self, name):
        # if called by hasattr(a, 'b') not by a.b
        #     print("I am called by hasattr")
        print(name)
a = a_class()
a.b_attr
hasattr(a, 'c_attr')
Please take a look at the comment inside __getattr__. How do I do that? I am using Python 3. The reason is that I want to create attributes dynamically, but I don't want to do that when hasattr is used. Thanks.
You can't, without cheating. As the documentation says:
This [that is, hasattr] is implemented by calling getattr(object, name) and seeing whether it raises an exception or not.
In other words, you can't block hasattr without also blocking getattr, which basically means you can't block hasattr at all if you care about accessing attributes.
By "cheating" I mean one of these solutions that clever people like to post on here that involve an end-run around essentially all of Python. They typically involve reassigning builtins, inspecting/manipulating the call stack, using introspection to peek at the literal source code, modifying "secret" internal attributes of objects, and so on. For instance, you could look at the call stack to see if hasattr is in the call chain. This type of solution is possible, but extremely fragile, with possibility of breaking in future Python versions, on non-CPython implementations, or in situations where another equally ugly and devious hack is also being used.
You can see a similar question and some discussion here.
This discussion applies to Python 3. (turns out it works on Python 2.7 as well)
Not exactly the way you described but the following points might help:
__getattr__ is only invoked when the attribute is not found the normal way
hasattr() checks whether AttributeError is raised
See if the following code helps!
>>> class A:
...     def __init__(self, a=1, b=2):
...         self.a = a
...         self.b = b
...
...     def __getattr__(self, name):
...         print('calling __getattr__')
...         print('This is instance attributes: {}'.format(self.__dict__))
...
...         if name not in ('c', 'd'):
...             raise AttributeError()
...         else:
...             return 'My Value'
...         return 'Default'
>>>
>>> a = A()
>>> print('a = {}'.format(a.a))
a = 1
>>> print('c = {}'.format(a.c))
calling __getattr__
This is instance attributes: {'a': 1, 'b': 2}
c = My Value
>>> print('hasattr(a, "e") returns {}'.format(hasattr(a, 'e')))
calling __getattr__
This is instance attributes: {'a': 1, 'b': 2}
hasattr(a, "e") returns False
>>>