class a_class:
    def __getattr__(self, name):
        # if called by hasattr(a, 'b') not by a.b
        # print("I am called by hasattr")
        print(name)

a = a_class()
a.b_attr
hasattr(a, 'c_attr')
Please take a look at the comment inside __getattr__. How do I do that? I am using Python 3. The reason is that I want to create attributes dynamically, but I don't want to do that when hasattr is used. Thanks.
You can't, without cheating. As the documentation says:
This [that is, hasattr] is implemented by calling getattr(object, name) and seeing whether it raises an exception or not.
In other words, you can't block hasattr without also blocking getattr, which basically means you can't block hasattr at all if you care about accessing attributes.
By "cheating" I mean one of these solutions that clever people like to post on here that involve an end-run around essentially all of Python. They typically involve reassigning builtins, inspecting/manipulating the call stack, using introspection to peek at the literal source code, modifying "secret" internal attributes of objects, and so on. For instance, you could look at the call stack to see if hasattr is in the call chain. This type of solution is possible, but extremely fragile, with possibility of breaking in future Python versions, on non-CPython implementations, or in situations where another equally ugly and devious hack is also being used.
You can see a similar question and some discussion here.
This discussion applies to Python 3 (it turns out it works on Python 2.7 as well).
Not exactly the way you described, but the following points might help:
__getattr__ is only invoked when the attribute is not found through the normal lookup
hasattr() checks whether AttributeError is raised
See if the following code helps!
>>> class A:
...     def __init__(self, a=1, b=2):
...         self.a = a
...         self.b = b
...
...     def __getattr__(self, name):
...         print('calling __getattr__')
...         print('This is instance attributes: {}'.format(self.__dict__))
...
...         if name not in ('c', 'd'):
...             raise AttributeError()
...         else:
...             return 'My Value'
>>>
>>> a = A()
>>> print('a = {}'.format(a.a))
a = 1
>>> print('c = {}'.format(a.c))
calling __getattr__
This is instance attributes: {'a': 1, 'b': 2}
c = My Value
>>> print('hasattr(a, "e") returns {}'.format(hasattr(a, 'e')))
calling __getattr__
This is instance attributes: {'a': 1, 'b': 2}
hasattr(a, "e") returns False
>>>
I found a good description of Python properties at this link:
How does the @property decorator work in Python?
The example below shows how it works, but I found an exception for the builtin class attribute __name__.
I now have a reload function which raises an error.
@property
def foo(self): return self._foo
really means the same thing as
def foo(self): return self._foo
foo = property(foo)
here is my example
class A(object):
    @property
    def __name__(self):
        return 'dd'

a = A()
print(a.__name__)
dd
this works; however, the following does not:
class B(object):
    pass

def test(self):
    return 'test'

B.t = property(test)
print(B.t)
B.__name__ = property(test)
<property object at 0x7f71dc5e1180>
Traceback (most recent call last):
File "<string>", line 23, in <module>
TypeError: can only assign string to B.__name__, not 'property'
Does anyone know the difference for the builtin __name__ attribute? It works if I use the normal property decorator, but not with the second way. I now have a requirement to reload the function when code changes, but this error blocks the reload procedure. Can anyone help? Thanks.
The short answer is: __name__ is deep magic in CPython.
So, first, let's get the technicalities out of the way. To quote what you said
@property
def foo(self): return self._foo
really means the same thing as
def foo(self): return self._foo
foo = property(foo)
This is correct. But it can be a bit misleading. You have this A class
class A(object):
    @property
    def __name__(self):
        return 'dd'
And you claim that it's equivalent to this B class
class B(object):
    pass

def test(self):
    return 'test'

B.__name__ = property(test)
which is not correct. It's actually equivalent to this
def test(self):
    return 'test'

class B(object):
    __name__ = property(test)
which works and does what you expect it to. And you're also correct that, for most names in Python, your B and my B would be the same. What difference does it make whether I'm assigning to a name inside the class or immediately after its declaration? Replace __name__ with ravioli in the above snippets and either will work. So what makes __name__ special?
That's where the magic comes in. When you define a name inside the class, you're working directly on the class' internal dictionary, so
class A:
    foo = 1
    def bar(self):
        return 1
This defines two things on the class A. One happens to be a number and the other happens to be a function (which will likely be called as a bound method). Now we can access these.
A.foo # Returns 1, simple access
A.bar # Returns the function object bar
A().foo # Returns 1
A().bar # Returns a bound method object
When we look up the names directly on A, we simply access the slots like we would on any object. However, when we look them up on A() (an instance of A), a multi-step process happens
Look up the name on the instance's __dict__ directly.
If that failed, then look up the name on the class' __dict__.
If we found it on the class, see if there's a __get__ on the result and call it.
That third step is what allows bound method objects to work, and it's also the mechanism underlying the property decorators in Python.
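To make step 3 concrete, here is a small illustration (my own example) of manually doing what attribute lookup does behind the scenes:

class A:
    def bar(self):
        return 1

f = A.__dict__['bar']      # the plain function stored in the class dict
bound = f.__get__(A(), A)  # step 3: __get__ produces a bound method
print(bound())             # 1, exactly like A().bar()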
Let's go through this whole process with a property called ravioli. No magic here.
class A(object):
    @property
    def ravioli(self):
        return 'dd'
When we do A().ravioli, first we see if there's a ravioli on the instance we just made. There isn't, so we check the class' __dict__, and indeed we find a property object at that position. That property object has a __get__, so we call it, and it returns 'dd', so indeed we get the string 'dd'.
>>> A().ravioli
'dd'
Now I would expect that, if I do A.ravioli, we will simply get the property object. Since we're not calling it on an instance, we don't call __get__.
>>> A.ravioli
<property object at 0x7f5bd3690770>
And indeed, we get the property object, as expected.
Now let's do the exact same thing but replace ravioli with __name__.
class A(object):
    @property
    def __name__(self):
        return 'dd'
Great! Now let's make an instance.
>>> A().__name__
'dd'
Sensible, we looked up __name__ on A's __dict__ and found a property, so we called its __get__. Nothing weird.
Now
>>> A.__name__
'A'
Um... what? If we had just found the property on A's __dict__, then we should see that property here, right?
Well, no, not always. See, in the abstract, foo.bar normally looks in foo.__dict__ for a field called bar. But it doesn't do that if the type of foo defines a __getattribute__. If it defines that, then that method is always called instead.
Now, the type of A is type, the type of all Python types. Read that sentence a few times and make sure it makes sense. And if we do a bit of spelunking into the CPython source code, we see that type actually defines __getattribute__ and __setattr__ for the following names:
__name__
__qualname__
__bases__
__module__
__abstractmethods__
__dict__
__doc__
__text_signature__
__annotations__
That explains how __name__ can serve double duty as a property on the class instances and also as an accessible field on the same class. It also explains why you get that highly specialized error message when reassigning to B.__name__: the line
B.__name__ = property(test)
is actually equivalent to
type.__setattr__(B, '__name__', property(test))
which is calling our special-case checker in CPython.
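As background, object.__setattr__ really can bypass a Python-level override on an ordinary class; a quick sketch (my own example):

class Stubborn(object):
    def __setattr__(self, name, value):
        raise TypeError("no assignments allowed")

s = Stubborn()
object.__setattr__(s, 'x', 1)  # sidesteps Stubborn.__setattr__
print(s.x)                     # 1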
For any other type in Python, in particular for user-defined types, we could get around this with object.__setattr__. Unfortunately,
>>> object.__setattr__(B, '__name__', property(test))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't apply this __setattr__ to type object
There's a really specific check to make sure we don't do exactly this, and the comment reads
/* Reject calls that jump over intermediate C-level overrides. */
We also can't use metaclasses to override __setattr__ and __getattribute__, because the instance lookup procedure specifically doesn't call those (in the above examples, __getattribute__ was called in every case except the one we care about for property purposes). I even tried subclassing str to trick __setattr__ into accepting our made-up value
class NameProperty(str):
    def __new__(cls, value, **kwargs):
        return str.__new__(cls, value)

    def __init__(self, value, method):
        self.method = method

    def __get__(self, instance, owner):
        return self.method(instance)

B.__name__ = NameProperty(B.__name__, method=test)
This actually passes the __setattr__ check, but it doesn't assign to B.__dict__ (since the __setattr__ still assigns to the actual CPython-level name, not to B.__dict__['__name__']), so the property lookup doesn't work.
So... that's how I reached my conclusion of: __name__ is deep magic in CPython. All of the usual Python metaprogramming techniques have failed, and all of the methods getting called are written deep down in C. My advice to you is: Stop using __name__ for things it's not intended for, or be prepared to write some C code and hack on CPython directly.
An object is supposed to be callable when it defines __call__. A class is supposed to be an object… or at least with some exceptions. This exception is what I'm failing to formally pin down, hence this question.
Let A be a simple class:
class A(object):
    def call(*args):
        return "In `call`"

    def __call__(*args):
        return "In `__call__`"
The first function is purposely named “call”, to make clear that its purpose is comparison with the other.
Let's instantiate it and forget about the expression it implies:
a = A() # Think of it as `a = magic` and forget about `A()`
Now, what do these evaluate to?
print(A.call())
print(a.call())
print(A())
print(a())
These result in:
>>> In `call`
>>> In `call`
>>> <__main__.A object at 0xNNNNNNNN>
>>> In `__call__`
The output (the third statement not running __call__) does not come as a surprise, but then I think of how it is said everywhere that “Python classes are objects”…
These, more explicit, do however run __call__:
print(A.__call__())
print(a.__call__())
>>> In `__call__`
>>> In `__call__`
All of this is just to show how A() may finally look strange.
There are exceptions in Python's rules, but the documentation about object.__call__ does not say a lot… no more than this:
3.3.5. Emulating callable objects
object.__call__(self[, args...])
Called when the instance is “called” as a function; […]
But how does Python tell that “it's called as a function”, and whether or not to honour the object.__call__ rule?
This could be a matter of type, but even type has object as its base class.
Where can I learn more (and formally) about it?
By the way, is there any difference here between Python 2 and Python 3?
----- %< ----- edit ----- >% -----
Conclusions and other experiments after one answer and one comment
Update #1
After @Veedrac's answer and @chepner's comment, I came to this other test, which completes the comments from both:
class M(type):
    def __call__(*args):
        return "In `M.__call__`"

class A(object, metaclass=M):
    def call(*args):
        return "In `call`"

    def __call__(*args):
        return "In `A.__call__`"
print(A())
The result is:
>>> In `M.__call__`
So it seems it's the metaclass which drives the “call” operations. If I understand correctly, the metaclass matters not only for the class, but also for instances of the class.
Update #2
Another relevant test, which shows this is not an attribute of the object which matters, but an attribute of the type of the object:
class A(object):
    def __call__(*args):
        return "In `A.__call__`"

def call2(*args):
    return "In `call2`"

a = A()
print(a())
As expected, it prints:
>>> In `A.__call__`
Now this:
a.__call__ = call2
print(a())
It prints:
>>> In `A.__call__`
The same as before the attribute was assigned: it does not print In `call2`; it's still In `A.__call__`. That's important to note, and it also explains why it was the __call__ of the meta‑class which was invoked earlier (keep in mind the meta‑class is the type of the class object). The __call__ used when calling an object as a function is not looked up on the object itself, but on its type.
x(*args, **kwargs) is the same as type(x).__call__(x, *args, **kwargs).
So you have
>>> type(A).__call__(A)
<__main__.A object at 0x7f4d88245b50>
and it all makes sense.
chepner points out in the comments that type(A) == type. This is kind of weird, because type(A)(A) just gives type again! But remember that we're instead using type(A).__call__(A), which is not the same.
So this resolves to type.__call__(A). This is the constructor function for classes, which builds the data-structures and does all the construction magic.
The same is true of most dunder (double underscore) methods, such as __eq__. This is partially an optimisation in those cases.
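A small check of that rule, under my own example names:

class A(object):
    def __call__(self):
        return "instance call"

a = A()
assert a() == type(a).__call__(a) == "instance call"
# Calling the class itself goes through type(A).__call__, i.e. type.__call__,
# which is what actually constructs a new instance:
assert isinstance(type(A).__call__(A), A)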
Python 2.7 docs for weakref module say this:
Not all objects can be weakly referenced; those objects which can
include class instances, functions written in Python (but not in C),
methods (both bound and unbound), ...
And Python 3.3 docs for weakref module say this:
Not all objects can be weakly referenced; those objects which can
include class instances, functions written in Python (but not in C),
instance methods, ...
To me, these indicate that weakrefs to bound methods (in all versions Python 2.7 - 3.3) should be good, and that weakrefs to unbound methods should be good in Python 2.7.
Yet in Python 2.7, creating a weakref to a method (bound or unbound) results in a dead weakref:
>>> def isDead(wr): print 'dead!'
...
>>> class Foo:
... def bar(self): pass
...
>>> wr=weakref.ref(Foo.bar, isDead)
dead!
>>> wr() is None
True
>>> foo=Foo()
>>> wr=weakref.ref(foo.bar, isDead)
dead!
>>> wr() is None
True
Not what I would have expected based on the docs.
Similarly, in Python 3.3, a weakref to a bound method dies on creation:
>>> wr=weakref.ref(Foo.bar, isDead)
>>> wr() is None
False
>>> foo=Foo()
>>> wr=weakref.ref(foo.bar, isDead)
dead!
>>> wr() is None
True
Again not what I would have expected based on the docs.
Since this wording has been around since 2.7, it's surely not an oversight. Can anyone explain how the statements and the observed behavior are in fact not in contradiction?
Edit/Clarification: In other words, the statement for 3.3 says "instance methods can be weakly referenced"; doesn't this mean that it is reasonable to expect that weakref.ref(an instance method)() is not None? And if it is None, then shouldn't "instance methods" not be listed among the types of objects that can be weakly referenced?
Foo.bar produces a new unbound method object every time you access it, due to some gory details about descriptors and how methods happen to be implemented in Python.
The class doesn't own unbound methods; it owns functions. (Check out Foo.__dict__['bar'].) Those functions just happen to have a __get__ which returns an unbound-method object. Since nothing else holds a reference, it vanishes as soon as you're done creating the weakref. (In Python 3, the rather unnecessary extra layer goes away, and an "unbound method" is just the underlying function.)
Bound methods work pretty much the same way: the function's __get__ returns a bound-method object, which is really just partial(function, self). You get a new one every time, so you see the same phenomenon.
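You can see both points in the interpreter (a sketch; addresses elided):

>>> Foo.__dict__['bar']    # the class stores a plain function
<function bar at 0x...>
>>> foo = Foo()
>>> foo.bar is foo.bar     # each access builds a fresh bound-method object
False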
You can store a method object and keep a reference to that, of course:
>>> def is_dead(wr): print "blech"
...
>>> class Foo(object):
... def bar(self): pass
...
>>> method = Foo.bar
>>> wr = weakref.ref(method, is_dead)
>>> 1 + 1
2
>>> method = None
blech
This all seems of dubious use, though :)
Note that if Python didn't spit out a new method instance on every attribute access, that'd mean that classes refer to their methods and methods refer to their classes. Having such cycles for every single method on every single instance in the entire program would make garbage collection way more expensive—and before 2.1, Python didn't even have cycle collection, so they would've stuck around forever.
@Eevee's answer is correct, but there is an important subtlety.
The Python docs state that instance methods (py3k) and un/bound methods (py2.4+) can be weakly referenced. You'd expect (naively, as I did) that weakref.ref(foo.bar)() would therefore be non-None, yet it is None, making the weak ref "dead on arrival" (DOA). This led to my question: if the weakref to an instance method is DOA, why do the docs say you can weak-ref a method?
So, as @Eevee showed, you can create a non-dead weak reference to an instance method by keeping a strong reference to the method object which you give to weakref:
m = foo.bar # creates a *new* instance method "Foo.bar" and strong refs it
wr = weakref.ref(m)
assert wr() is not None # success
The subtlety (to me, anyways) is that a new instance method object is created every time you use Foo.bar, so even after the above code is run, the following will fail:
wr = weakref.ref(foo.bar)
assert wr() is not None # fails
because foo.bar is a new instance of foo's "bar" method, different from m; there is no strong ref to this new instance, so it is immediately gc'd, even though you created a strong reference to an earlier instance (it is not the same strong ref). To be clear,
>>> d1 = foo.bla # assume bla is a data member
>>> d2 = foo.bla # assume bla is a data member
>>> d1 is d2
True # which is what you expect
>>> m1 = foo.bar # assume bar is an instance method
>>> m2 = foo.bar
>>> m1 is m2
False # !!! counter-intuitive
This takes many people by surprise, since no one expects access to an instance member to create a new instance of anything. For example, if foo.bla is a data member of foo, then using foo.bla in your code does not create a new instance of the object referenced by foo.bla. But if bla is a function, foo.bla does create a new instance of type "instance method" representing the bound function.
Why the weakref docs (since python 2.4!) don't point that out is very strange, but that's a separate issue.
While I see that there's an accepted answer as to why this should be so, for the simple use-case of wanting an object that acts as a weakref to a bound method, I believe one might be able to sneak by with an object like the one below. It's kind of a runt compared to some of the 'codier' things out there, but it works.
from weakref import proxy

class WeakMethod(object):
    """A callable object. Takes one argument to init: 'object.method'.
    Once created, call this object -- MyWeakMethod() --
    and pass args/kwargs as you normally would.
    """
    def __init__(self, object_dot_method):
        self.target = proxy(object_dot_method.__self__)
        self.method = proxy(object_dot_method.__func__)
        # Older versions of Python can use 'im_self' and 'im_func'
        # in place of '__self__' and '__func__' respectively.

    def __call__(self, *args, **kwargs):
        """Call the method with args and kwargs as needed."""
        return self.method(self.target, *args, **kwargs)
As an example of its ease of use:
class A(object):
    def __init__(self, name):
        self.name = name

    def foo(self):
        return "My name is {}".format(self.name)
>>> Stick = A("Stick")
>>> WeakFoo = WeakMethod(Stick.foo)
>>> WeakFoo()
'My name is Stick'
>>> Stick.name = "Dave"
>>> WeakFoo()
'My name is Dave'
Note that evil trickery will cause this to blow up, so depending on how you'd prefer it to work this may not be the best solution.
>>> A.foo = lambda self: "My eyes, aww my eyes! {}".format(self.name)
>>> Stick.foo()
'My eyes, aww my eyes! Dave'
>>> WeakFoo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 6, in __call__
ReferenceError: weakly-referenced object no longer exists
>>>
If you are going to be replacing methods on the fly, you might need to use a getattr(weakref.proxy(object), 'name_of_attribute_as_string') approach instead, as sketched below. getattr is a fairly fast look-up, so that isn't the worst thing in the world, but depending on what you're doing, YMMV.
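A sketch of that name-based variant (my own code, with the same caveats as above):

from weakref import proxy

class WeakMethodByName(object):
    """Hypothetical variant of WeakMethod: re-resolve the method by name
    on every call, so rebinding the method on the class doesn't leave us
    holding a dead proxy to the old function."""
    def __init__(self, object_dot_method):
        self.target = proxy(object_dot_method.__self__)
        self.name = object_dot_method.__func__.__name__

    def __call__(self, *args, **kwargs):
        # getattr re-does the lookup, picking up any replacement method
        return getattr(self.target, self.name)(*args, **kwargs)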
I have a subclass and I want it to not include a class attribute that's present on the base class.
I tried this, but it doesn't work:
>>> class A(object):
...     x = 5
>>> class B(A):
...     del x
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
class B(A):
File "<pyshell#1>", line 2, in B
del x
NameError: name 'x' is not defined
How can I do this?
You can use delattr(cls, field_name) to remove it from the class definition.
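For example (a sketch; note this works on the class that actually owns the attribute, and, as discussed further below, raises AttributeError on a subclass that merely inherits it):

class A(object):
    x = 5

delattr(A, 'x')              # equivalent to: del A.x
assert not hasattr(A, 'x')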
You don't need to delete it. Just override it.
class B(A):
    x = None
or simply don't reference it.
Or consider a different design (instance attribute?).
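A quick sketch of the override approach above (my example; note the attribute still exists on B, it's just None):

class A(object):
    x = 5

class B(A):
    x = None

assert hasattr(B, 'x')   # still present
assert B.x is None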
None of the answers worked for me.
For example, delattr(SubClass, "attrname") (or its exact equivalent, del SubClass.attrname) won't "hide" a parent method, because this is not how method resolution works. It would fail with AttributeError('attrname',) instead, as the subclass doesn't have attrname. And, of course, replacing the attribute with None doesn't actually remove it.
Let's consider this base class:
class Spam(object):
    # Also try with `expect = True` and with a `@property` decorator
    def expect(self):
        return "This is pretty much expected"
I know only two ways to subclass it while hiding the expect attribute:
Using a descriptor class that raises AttributeError from __get__. On attribute lookup, there will be an exception, generally indistinguishable from a lookup failure.
The simplest way is just declaring a property that raises AttributeError. This is essentially what @JBernardo had suggested.
class SpanishInquisition(Spam):
    @property
    def expect(self):
        raise AttributeError("Nobody expects the Spanish Inquisition!")
assert hasattr(Spam, "expect") == True
# assert hasattr(SpanishInquisition, "expect") == False # Fails!
assert hasattr(SpanishInquisition(), "expect") == False
However, this only works for instances, and not for the class itself (hasattr(SpanishInquisition, "expect") would still be True, so the commented-out assertion above fails).
If you want all the assertions above to hold true, use this:
class AttributeHider(object):
    def __get__(self, instance, owner):
        raise AttributeError("This is not the attribute you're looking for")

class SpanishInquisition(Spam):
    expect = AttributeHider()
assert hasattr(Spam, "expect") == True
assert hasattr(SpanishInquisition, "expect") == False # Works!
assert hasattr(SpanishInquisition(), "expect") == False
I believe this is the most elegant method, as the code is clear, generic and compact. Of course, one should really think twice if removing the attribute is what they really want.
Overriding attribute lookup with the __getattribute__ magic method. You can do this either in a subclass or in a mixin (like in the example below, as I wanted to write it just once), and that will hide the attribute on the subclass instances. If you want to hide the method from the subclass itself as well, you need to use metaclasses.
class ExpectMethodHider(object):
    def __getattribute__(self, name):
        if name == "expect":
            raise AttributeError("Nobody expects the Spanish Inquisition!")
        return super().__getattribute__(name)

class ExpectMethodHidingMetaclass(ExpectMethodHider, type):
    pass

# I've used Python 3.x here, thus the syntax.
# For Python 2.x use __metaclass__ = ExpectMethodHidingMetaclass
class SpanishInquisition(ExpectMethodHider, Spam,
                         metaclass=ExpectMethodHidingMetaclass):
    pass
assert hasattr(Spam, "expect") == True
assert hasattr(SpanishInquisition, "expect") == False
assert hasattr(SpanishInquisition(), "expect") == False
This looks worse (more verbose and less generic) than the method above, but one may consider this approach as well.
Note, this does not work for special ("magic") methods (e.g. __len__), because those bypass __getattribute__. Check out the Special Method Lookup section of the Python documentation for more details. If this is what you need to undo, just override the method and call object's implementation, skipping the parent.
Needless to say, this only applies to "new-style classes" (the ones that inherit from object), as old-style classes don't support the magic-method and descriptor protocols used here. Hopefully, those are a thing of the past.
Maybe you could make x a property and raise AttributeError whenever someone tries to access it.
>>> class C:
...     x = 5
>>> class D(C):
...     def foo(self):
...         raise AttributeError
...     x = property(foo)
>>> d = D()
>>> print(d.x)
Traceback (most recent call last):
  File "<pyshell#17>", line 3, in foo
    raise AttributeError
AttributeError
Think carefully about why you want to do this; you probably don't. Consider not making B inherit from A.
The idea of subclassing is to specialise an object. In particular, children of a class should be valid instances of the parent class:
>>> class foo(dict): pass
>>> isinstance(foo(), dict)
True
If you implement this behaviour (e.g. with a property whose getter raises AttributeError), you are breaking the subclassing concept, and this is Bad.
I had the same problem, and I thought I had a valid reason to delete the class attribute in the subclass: my superclass (call it A) had a read-only property that provided the value of the attribute, but in my subclass (call it B), the attribute was a read/write instance variable. I found that Python was calling the property function even though I thought the instance variable should have been overriding it. I could have made a separate getter function to be used to access the underlying property, but that seemed like an unnecessary and inelegant cluttering of the interface namespace (as if that really matters).
As it turns out, the answer was to create a new abstract superclass (call it S) with the original common attributes of A, and have A and B derive from S. Since Python has duck typing, it does not really matter that B does not extend A; I can still use them in the same places, since they implicitly implement the same interface.
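A sketch of that refactoring (the names S, A, B and value are illustrative, not from the original):

class S(object):
    """Abstract base holding what A and B have in common."""

class A(S):
    @property
    def value(self):      # read-only property in A
        return 42

class B(S):
    def __init__(self):
        self.value = 0    # plain read/write instance attribute in B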
Trying to do this is probably a bad idea, but...
It doesn't seem to be possible to do this via "proper" inheritance, because of how looking up B.x works by default. When getting B.x, x is first looked up in B, and if it's not found there, it's searched for in A; but on the other hand, when setting or deleting B.x, only B will be searched. So for example
>>> class A:
...     x = 5
>>> class B(A):
...     pass
>>> B.x
5
>>> del B.x
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: class B has no attribute 'x'
>>> B.x = 6
>>> B.x
6
>>> del B.x
>>> B.x
5
Here we see that at first we can't delete B.x, since it doesn't exist (A.x exists and is what gets served when you evaluate B.x). However, by setting B.x to 6, B.x comes into existence: it can be retrieved by B.x and deleted by del B.x, whereupon it ceases to exist, and after that A.x is again served as the response to B.x.
What you could do on the other hand is to use metaclasses to make B.x raise AttributeError:
class NoX(type):
    @property
    def x(self):
        raise AttributeError("We don't like X")

class A(object):
    x = [42]

class B(A, metaclass=NoX):
    pass

print(A.x)
print(B.x)
Now of course purists may yell that this breaks the LSP, but it's not that simple. It all boils down to whether you consider that you've created a subtype by doing this. The issubclass and isinstance checks say yes, but LSP says no (and many programmers would assume "yes", since you inherit from A).
The LSP means that if B is a subtype of A, then we could use B wherever we could use A; but since we can't do that with this construct, we could conclude that B actually isn't a subtype of A, and therefore LSP isn't violated.
Is it possible to make a decorator that makes attributes lazy in such a way that they are not evaluated when probed with hasattr()? I worked out how to make them lazy, but hasattr() makes the attribute evaluate prematurely. E.g.,
class lazyattribute:
    ...  # Magic.

class A:
    @lazyattribute
    def bar(self):
        print("Computing")
        return 5
>>> a = A()
>>> print(a.bar)
Computing
5
>>> print(a.bar)
5
>>> b = A()
>>> hasattr(b, 'bar')
Computing
True
# Wanted output: True, without triggering the computation
It may be difficult to do. From the hasattr documentation:
hasattr(object, name)
The arguments are an object and a string. The result is True if the string is the name of one of the object’s attributes, False if not. (This is implemented by calling getattr(object, name) and seeing whether it raises an exception or not.)
Since attributes may be generated dynamically by a __getattr__ method, there's no other way to reliably check for their presence. For your special situation, maybe testing the dictionaries explicitly would be enough:
any('bar' in d for d in (b.__dict__, b.__class__.__dict__))
What nobody seems to have addressed so far is that perhaps the best thing to do is not to use hasattr(). Instead, go for EAFP (Easier to Ask Forgiveness than Permission).
try:
    x = foo.bar
except AttributeError:
    # what went in your else-block
    ...
else:
    # what went in your if hasattr(foo, "bar") block
    ...
This is obviously not a drop-in replacement, and you might have to move stuff around a bit, but it's possibly the "nicest" solution (subjectively, of course).
The problem is that hasattr uses getattr so your attribute is always going to be evaluated when you use hasattr. If you post the code for your lazyattribute magic hopefully someone can suggest an alternative way of testing the presence of the attribute which doesn't require hasattr or getattr. See the help for hasattr:
>>> help(hasattr)
Help on built-in function hasattr in module __builtin__:

hasattr(...)
    hasattr(object, name) -> bool

    Return whether the object has an attribute with the given name.
    (This is done by calling getattr(object, name) and catching exceptions.)
I'm curious why you need something like this. If hasattr ends up calling your "compute function", then so be it. Just how lazy does your property need to be anyway?
Still, here's a rather inelegant way of doing it by examining the calling function's name. It could probably be coded a little better, but I don't think it should ever be used seriously.
import inspect

class lazyattribute(object):
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, kls=None):
        if obj is None or inspect.stack()[1][4][0].startswith('hasattr'):
            return None
        value = self.func(obj)
        setattr(obj, self.func.__name__, value)
        return value

class Foo(object):
    @lazyattribute
    def bar(self):
        return 42
orip's answer will be enough only if your object's inheritance has one level of depth.
You should iterate over the method resolution order of object's class to have a complete solution:
from itertools import chain

def lazy_hasattr(obj, name):
    # checks for the attribute without triggering __get__
    return any(name in d for d in chain((obj.__dict__,),
                                        (c.__dict__ for c in obj.__class__.mro())))

# Usage:
lazy_hasattr(b, 'bar')
Well, the fix is a little hackish, but it consists of the following:
rename hasattr (say, as _hasattr)
rebind hasattr as the following:
def hasattr(obj, name):
    try:
        return obj._hasattr(name) or _hasattr(obj, name)
    except:
        return _hasattr(obj, name)
implement the class method _hasattr by checking some data structure (e.g. an array) which is populated with all the lazy attribute names (for an array you'd say: name in lazyAttrArray)
finally, somehow have your @lazyattribute decorator add items into some sort of structure (like the array we mentioned above), so that when you call _hasattr you can look in that structure
this is the step that I'm not quite sure how you'd implement, because I haven't worked with creating my own decorators; a sketch follows
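For what it's worth, here is a minimal sketch of what those last two steps might look like. It assumes Python 3.6+ (for __set_name__), and the registry and names below are mine, not from the answer above:

_builtin_hasattr = hasattr   # step 1: keep a reference to the real builtin
_lazy_names = {}             # registry: class -> set of lazy attribute names

class lazyattribute(object):
    def __init__(self, func):
        self.func = func

    def __set_name__(self, owner, name):
        # record the lazy name when the class body executes (Python 3.6+)
        _lazy_names.setdefault(owner, set()).add(name)

    def __get__(self, obj, kls=None):
        if obj is None:
            return self
        value = self.func(obj)                    # evaluate once...
        setattr(obj, self.func.__name__, value)   # ...and cache on the instance
        return value

def hasattr(obj, name):  # step 2: shadow the builtin (in this module only)
    for cls in type(obj).mro():
        if name in _lazy_names.get(cls, ()):
            return True  # report presence without evaluating
    return _builtin_hasattr(obj, name)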