How do I check if an object has some attribute? For example:
>>> a = SomeClass()
>>> a.property
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: SomeClass instance has no attribute 'property'
How do I tell if a has the attribute property before using it?
Try hasattr():
if hasattr(a, 'property'):
a.property
See zweiterlinde's answer below, which offers good advice about asking forgiveness: a very Pythonic approach!
The general practice in Python is that, if the attribute is likely to be there most of the time, simply access it and either let the exception propagate or trap it with a try/except block. This will likely be faster than hasattr. If the attribute is likely to be absent most of the time, or you're not sure, using hasattr will probably be faster than repeatedly falling into an exception block.
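If you want to check that claim on your own interpreter, a rough timeit sketch along these lines works (the class C and the attribute names are made up for illustration; absolute numbers vary by Python version and hardware):
import timeit

class C:
    x = 1

c = C()

def try_present():            # attribute exists: the try costs almost nothing
    try:
        c.x
    except AttributeError:
        pass

def try_absent():             # attribute missing: raising/catching dominates
    try:
        c.missing
    except AttributeError:
        pass

print(timeit.timeit(try_present))
print(timeit.timeit(lambda: hasattr(c, 'x')))
print(timeit.timeit(try_absent))
print(timeit.timeit(lambda: hasattr(c, 'missing')))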
As Jarret Hardie answered, hasattr will do the trick. I would like to add, though, that many in the Python community recommend a strategy of "easier to ask for forgiveness than permission" (EAFP) rather than "look before you leap" (LBYL). See these references:
EAFP vs LBYL (was Re: A little disappointed so far)
EAFP vs. LBYL #Code Like a Pythonista: Idiomatic Python
i.e.:
try:
doStuff(a.property)
except AttributeError:
otherStuff()
... is preferred to:
if hasattr(a, 'property'):
doStuff(a.property)
else:
otherStuff()
You can use hasattr() or catch AttributeError, but if you really just want the value of the attribute with a default if it isn't there, the best option is just to use getattr():
getattr(a, 'property', 'default value')
I think what you are looking for is hasattr. However, I'd recommend something like this if you want to detect Python properties:
try:
    getattr(someObject, 'someProperty')
except AttributeError:
    print("Doesn't exist")
else:
    print("Exists")
The disadvantage here is that attribute errors raised inside the property's __get__ code are also caught.
Otherwise, do:
if hasattr(someObject, 'someProp'):
    # access or set someProp
    pass
Docs: http://docs.python.org/library/functions.html
Warning: the reason for my recommendation is that hasattr can mis-report properties: if a property's getter raises an exception, hasattr simply reports the attribute as missing (and in Python 2 it swallowed any exception, not just AttributeError).
Link: http://mail.python.org/pipermail/python-dev/2005-December/058498.html
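A small sketch of the pitfall (the Widget class and its _price attribute are made up for illustration): the property is defined, but because its getter raises AttributeError internally, hasattr reports it as missing, and a bare try/except AttributeError around the access has the same blind spot.
class Widget(object):
    @property
    def price(self):
        # a bug (or missing data) inside the getter
        return self._price   # no such attribute -> AttributeError

w = Widget()
print(hasattr(w, 'price'))   # False, even though the property is defined
print('price' in dir(w))     # True: the property itself is there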
According to pydoc, hasattr(obj, prop) simply calls getattr(obj, prop) and catches exceptions. So, it is just as valid to wrap the attribute access with a try statement and catch AttributeError as it is to use hasattr() beforehand.
a = SomeClass()
try:
return a.fake_prop
except AttributeError:
return default_value
I would like to suggest avoiding this:
try:
doStuff(a.property)
except AttributeError:
otherStuff()
As user @jpalecek mentioned: if an AttributeError occurs inside doStuff(), you are lost.
Maybe this approach is better:
try:
val = a.property
except AttributeError:
otherStuff()
else:
doStuff(val)
For objects other than dictionaries:
if hasattr(a, 'property'):
    a.property
For a dictionary, hasattr() will not do what you want: it checks attributes, not keys.
Many people suggest has_key() for dictionaries, but it is deprecated (and removed in Python 3). So for a dictionary, use the in operator instead:
if 'property' in a:
    a['property']
hasattr() is the right answer. What I want to add is that hasattr() can be used well in conjunction with assert (to avoid unnecessary if statements and make the code more readable):
assert hasattr(a, 'property'), 'object lacks property'
print(a.property)
If the property is missing, the program will exit with an AssertionError, printing out the provided error message ('object lacks property' in this case).
As stated in another answer on SO:
Asserts should be used to test conditions that should never happen.
The purpose is to crash early in the case of a corrupt program state.
Often this is the case when a property is missing and then assert is very appropriate.
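One caveat, not from the original answer but worth keeping in mind: assertions are stripped when Python runs with optimizations enabled, so the check silently disappears in that mode. A small sketch, assuming it is saved as a hypothetical check_attr.py:
class A(object):
    pass

a = A()
# `python check_attr.py` exits with: AssertionError: object lacks property
# `python -O check_attr.py` removes the assert, so you get the original
# AttributeError from the next line instead
assert hasattr(a, 'property'), 'object lacks property'
print(a.property)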
If you are using Python 3.6 or higher like me, there is a convenient-looking alternative for checking whether an object has a particular attribute:
if 'attr1' in obj1:
    print("attr1 = {}".format(obj1["attr1"]))
However, I'm not sure which is the best approach right now: hasattr(), getattr(), or in. Comments are welcome.
EDIT: This approach has a serious limitation: it only works if the object supports the in operator (a dictionary, for example, where it checks keys), so it tests membership rather than attributes. Please check the comments below.
You were probably expecting hasattr(), but consider preferring getattr(): a single getattr() call is usually cheaper than a hasattr() check followed by a separate attribute access.
Using hasattr():
if hasattr(a, 'property'):
    print(a.property)
Here I use getattr() to fetch the attribute directly; if it is missing, it returns None:
prop = getattr(a, "property", None)
if prop is not None:
    print(prop)
Depending on the situation you can check with isinstance what kind of object you have, and then use the corresponding attributes. With the introduction of abstract base classes in Python 2.6/3.0 this approach has also become much more powerful (basically ABCs allow for a more sophisticated way of duck typing).
One situation where this is useful would be if two different objects have an attribute with the same name but with different meanings. Using only hasattr might then lead to strange errors.
One nice example is the distinction between iterators and iterables (see this question). The __iter__ methods of an iterator and an iterable have the same name but are semantically quite different! So hasattr is useless here, but isinstance together with ABCs provides a clean solution.
However, I agree that in most situations the hasattr approach (described in other answers) is the most appropriate solution.
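To make the iterator/iterable distinction concrete, here is a small sketch using the abstract base classes from collections.abc:
from collections.abc import Iterable, Iterator

items = [1, 2, 3]

print(isinstance(items, Iterable))         # True: a list can be iterated over
print(isinstance(items, Iterator))         # False: a list is not its own iterator
print(isinstance(iter(items), Iterator))   # True

# hasattr alone cannot make this distinction:
print(hasattr(items, '__iter__'), hasattr(iter(items), '__iter__'))   # True True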
Here's a very intuitive approach:
if 'property' in dir(a):
a.property
If a is a dictionary, you check for the key instead:
if 'property' in a:
    a['property']
This is super simple: just use dir(object).
This will return a list of every available function and attribute of the object.
You can check whether an object contains an attribute by using the hasattr built-in function.
For instance, if your object is a and you want to check for the attribute stuff:
>>> class a:
... stuff = "something"
...
>>> hasattr(a,'stuff')
True
>>> hasattr(a,'other_stuff')
False
The signature is hasattr(object, name) -> bool: it returns True if the object has an attribute with the given name, and False otherwise.
You can use hasattr() to check whether an object or class has an attribute in Python.
For example, here is a Person class:
class Person:
    greeting = "Hello"

    def __init__(self, name, age):
        self.name = name
        self.age = age

    def test(self):
        print("Test")
Then, you can use hasattr() on the object as shown below:
obj = Person("John", 27)
obj.gender = "Male"
print("greeting:", hasattr(obj, 'greeting'))
print("name:", hasattr(obj, 'name'))
print("age:", hasattr(obj, 'age'))
print("gender:", hasattr(obj, 'gender'))
print("test:", hasattr(obj, 'test'))
print("__init__:", hasattr(obj, '__init__'))
print("__str__:", hasattr(obj, '__str__'))
print("__module__:", hasattr(obj, '__module__'))
Output:
greeting: True
name: True
age: True
gender: True
test: True
__init__: True
__str__: True
__module__: True
And, you can also use hasattr() directly on the class, as shown below:
print("greeting:", hasattr(Person, 'greeting'))
print("name:", hasattr(Person, 'name'))
print("age:", hasattr(Person, 'age'))
print("gender:", hasattr(Person, 'gender'))
print("test:", hasattr(Person, 'test'))
print("__init__:", hasattr(Person, '__init__'))
print("__str__:", hasattr(Person, '__str__'))
print("__module__:", hasattr(Person, '__module__'))
Output:
greeting: True
name: False
age: False
gender: False
test: True
__init__: True
__str__: True
__module__: True
Another possible option, but it depends on what you mean by 'before':
undefined = object()
class Widget:
def __init__(self):
self.bar = 1
def zoom(self):
print("zoom!")
a = Widget()
bar = getattr(a, "bar", undefined)
if bar is not undefined:
print("bar:%s" % (bar))
foo = getattr(a, "foo", undefined)
if foo is not undefined:
print("foo:%s" % (foo))
zoom = getattr(a, "zoom", undefined)
if zoom is not undefined:
zoom()
output:
bar:1
zoom!
This allows you to even check for None-valued attributes.
But! Be very careful that you don't accidentally instantiate undefined in multiple places and compare across them, because the is check will never match in that case.
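A two-line illustration of that trap: sentinels created in different places are distinct objects, so an is comparison across them will never match.
undefined_a = object()   # created in one module, say
undefined_b = object()   # accidentally re-created somewhere else
print(undefined_a is undefined_b)   # False: the two sentinels never compare equal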
Update:
Because of what I was warning about in the paragraph above (multiple undefineds that never match), I have recently slightly modified this pattern:
undefined = NotImplemented
NotImplemented, not to be confused with NotImplementedError, is a built-in: it semi-matches the intent of a JS undefined, and you can reuse its definition everywhere and it will always match. The drawback is that it is "truthy" in boolean contexts and can look weird in logs and stack traces (but you quickly get over it once you know it only appears in this context).
Does python have anything similar to a sealed class? I believe it's also known as final class, in java.
In other words, in python, can we mark a class so it can never be inherited or expanded upon? Did python ever considered having such a feature? Why?
Disclaimers
Actually trying to understand why sealed classes even exist. Answers here (and in many, many, many, many, many, really many other places) did not satisfy me at all, so I'm trying to look from a different angle. Please avoid theoretical answers to this question and focus on the title! Or, if you insist, at least please give one very good and practical example of a sealed class in C#, pointing out what would break big time if it was unsealed.
I'm no expert in either language, but I do know a bit of both. Just yesterday, while coding in C#, I got to know about the existence of sealed classes, and now I'm wondering if Python has anything equivalent to that. I believe there is a very good reason for its existence, but I'm really not getting it.
You can use a metaclass to prevent subclassing:
class Final(type):
def __new__(cls, name, bases, classdict):
for b in bases:
if isinstance(b, Final):
raise TypeError("type '{0}' is not an acceptable base type".format(b.__name__))
return type.__new__(cls, name, bases, dict(classdict))
class Foo:
__metaclass__ = Final
class Bar(Foo):
pass
gives:
>>> class Bar(Foo):
... pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in __new__
TypeError: type 'Foo' is not an acceptable base type
The __metaclass__ = Final line makes the Foo class 'sealed'.
Note that you'd use a sealed class in .NET as a performance measure; since there won't be any subclassing, methods can be addressed directly. Python method lookups work very differently, and there is no advantage or disadvantage, when it comes to method lookups, to using a metaclass like the above example.
Before we talk Python, let's talk "sealed":
I, too, have heard that the advantage of .Net sealed / Java final / C++ entirely-nonvirtual classes is performance. I heard it from a .Net dev at Microsoft, so maybe it's true. If you're building a heavy-use, highly-performance-sensitive app or framework, you may want to seal a handful of classes at or near the real, profiled bottleneck. Particularly classes that you are using within your own code.
For most applications of software, sealing a class that other teams consume as part of a framework/library/API is kinda...weird.
Mostly because there's a simple work-around for any sealed class, anyway.
I teach "Essential Test-Driven Development" courses, and in those three languages, I suggest consumers of such a sealed class wrap it in a delegating proxy that has the exact same method signatures, but they're override-able (virtual), so devs can create test-doubles for these slow, nondeterministic, or side-effect-inducing external dependencies.
[Warning: below snark intended as humor. Please read with your sense of humor subroutines activated. I do realize that there are cases where sealed/final are necessary.]
The proxy (which is not test code) effectively unseals (re-virtualizes) the class, resulting in v-table look-ups and possibly less efficient code (unless the compiler optimizer is competent enough to in-line the delegation). The advantages are that you can test your own code efficiently, saving living, breathing humans weeks of debugging time (in contrast to saving your app a few million microseconds) per month... [Disclaimer: that's just a WAG. Yeah, I know, your app is special. ;-]
So, my recommendations: (1) trust your compiler's optimizer, (2) stop creating unnecessary sealed/final/non-virtual classes that you built in order to either (a) eke out every microsecond of performance at a place that is likely not your bottleneck anyway (the keyboard, the Internet...), or (b) create some sort of misguided compile-time constraint on the "junior developers" on your team (yeah...I've seen that, too).
Oh, and (3) write the test first. ;-)
Okay, yes, there's always link-time mocking, too (e.g. TypeMock). You got me. Go ahead, seal your class. Whatevs.
Back to Python: The fact that there's a hack rather than a keyword is probably a reflection of the pure-virtual nature of Python. It's just not "natural."
By the way, I came to this question because I had the exact same question. Working on the Python port of my ever-so-challenging and realistic legacy-code lab, and I wanted to know if Python had such an abominable keyword as sealed or final (I use them in the Java, C#, and C++ courses as a challenge to unit testing). Apparently it doesn't. Now I have to find something equally challenging about untested Python code. Hmmm...
Python does have classes that can't be extended, such as bool or NoneType:
>>> class ExtendedBool(bool):
... pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: type 'bool' is not an acceptable base type
However, such classes cannot be created from Python code. (In the CPython C API, they are created by not setting the Py_TPFLAGS_BASETYPE flag.)
Python 3.6 will introduce the __init_subclass__ special method; raising an error from it will prevent creating subclasses. For older versions, a metaclass can be used.
Still, the most “Pythonic” way to limit usage of a class is to document how it should not be used.
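For completeness, a minimal sketch of the __init_subclass__ approach mentioned above (Python 3.6+; the class names are made up for illustration). Defining Foo itself is fine; only an attempt to subclass it triggers the hook:
class Foo:
    def __init_subclass__(cls, **kwargs):
        # called whenever someone tries to derive from Foo
        raise TypeError("type 'Foo' is not an acceptable base type")

class Bar(Foo):   # raises TypeError at class-definition time
    pass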
Similar in purpose to a sealed class, and useful for reducing memory usage (Usage of __slots__?), is the __slots__ attribute, which prevents monkey patching a class. Because it is too late to put a __slots__ into the class by the time the metaclass's __new__ is called, we have to put it into the namespace at the first possible moment, i.e. during __prepare__. As a bonus, this raises the TypeError a little bit earlier. Using mcs for the isinstance comparison removes the need to hardcode the metaclass's name inside itself. The disadvantage is that all unslotted attributes are read-only; therefore, if we want to set specific attributes during initialization or later, they have to be slotted specifically. This is feasible, e.g., by using a dynamic metaclass that takes the slots as an argument.
def Final(slots=[]):
if "__dict__" in slots:
raise ValueError("Having __dict__ in __slots__ breaks the purpose")
class _Final(type):
@classmethod
def __prepare__(mcs, name, bases, **kwargs):
for b in bases:
if isinstance(b, mcs):
msg = "type '{0}' is not an acceptable base type"
raise TypeError(msg.format(b.__name__))
namespace = {"__slots__":slots}
return namespace
return _Final
class Foo(metaclass=Final(slots=["_z"])):
y = 1
def __init__(self, z=1):
self.z = 1
@property
def z(self):
return self._z
@z.setter
def z(self, val:int):
if not isinstance(val, int):
raise TypeError("Value must be an integer")
else:
self._z = val
def foo(self):
print("I am sealed against monkey patching")
Here the attempt to overwrite foo.foo will throw AttributeError: 'Foo' object attribute 'foo' is read-only, and attempting to add foo.x will throw AttributeError: 'Foo' object has no attribute 'x'. The limiting power of __slots__ would be broken by inheriting, but because Foo has the metaclass Final, you can't inherit from it. It would also be broken if __dict__ were in __slots__, so we throw a ValueError in that case. To conclude, defining setters and getters for slotted properties lets you limit how the user can overwrite them.
foo = Foo()
# attributes are accessible
foo.foo()
print(foo.y)
# changing slotted attributes is possible
foo.z = 2
# %%
# overwriting unslotted attributes won't work
foo.foo = lambda:print("Guerilla patching attempt")
# overwriting an accordingly defined property won't work
foo.z = foo.foo
# expanding won't work
foo.x = 1
# %% inheriting won't work
class Bar(Foo):
pass
In that regard, Foo could not be inherited or expanded upon. The disadvantage is that all attributes have to be explicitly slotted, or are limited to a read-only class variable.
Python 3.8 has that feature in the form of the typing.final decorator:
class Base:
@final
def done(self) -> None:
...
class Sub(Base):
def done(self) -> None: # Error reported by type checker
...
@final
class Leaf:
...
class Other(Leaf): # Error reported by type checker
    ...
See https://docs.python.org/3/library/typing.html#typing.final
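Keep in mind that @final is advisory: it is enforced by static type checkers such as mypy, not by the interpreter. A small sketch showing that at runtime the subclass is still created without complaint:
from typing import final

@final
class Leaf:
    ...

class Other(Leaf):   # mypy flags this; plain `python` runs it happily
    ...

print(Other())   # works at runtime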
Although the title can be interpreted as three questions, the actual problem is simple to describe. On a Linux system I have python 2.7.3 installed, and want to be warned about python 3 incompatibilities. Therefore, my code snippet (tester.py) looks like:
#!/usr/bin/python -3
class MyClass(object):
def __eq__(self, other):
return False
When I execute this code snippet (meant only to show the problem, not an actual piece of code from my project) as
./tester.py
I get the following deprecation warning:
./tester.py:3: DeprecationWarning: Overriding __eq__ blocks inheritance of __hash__ in 3.x
class MyClass(object):
My question: How do I change this code snippet to get rid of the warning, i.e. to make it compatible to version 3? I want to implement the equality operator in the correct way, not just suppressing the warning or something similar.
From the documentation page for Python 3.4:
If a class does not define an __eq__() method it should not define a __hash__() operation either; if it defines __eq__() but not __hash__(), its instances will not be usable as items in hashable collections. If a class defines mutable objects and implements an __eq__() method, it should not implement __hash__(), since the implementation of hashable collections requires that a key’s hash value is immutable (if the object’s hash value changes, it will be in the wrong hash bucket).
Basically, you need to define a __hash__() method.
The problem is that for user-defined classes, __eq__() and __hash__() are defined automatically (inherited from object), and they are meant to be consistent:
x.__hash__() returns an appropriate value such that x == y implies
both that x is y and hash(x) == hash(y).
If you define just __eq__(), then in Python 3 __hash__ is set to None and instances become unhashable; that is exactly what the -3 warning is telling you. So you will hit the wall.
The simpler way out, if you don't want to bother implementing __hash__() and you know for certain that your objects will never be hashed, is to explicitly declare __hash__ = None, which takes care of the warning.
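If the instances do need to be hashable, the usual recipe is to hash the same value(s) that __eq__() compares. A minimal sketch, where the key attribute is just a stand-in for whatever state defines equality in your class (the __ne__ is there because Python 2, which the -3 flag implies, does not derive it from __eq__):
class MyClass(object):
    def __init__(self, key):
        self.key = key

    def __eq__(self, other):
        if not isinstance(other, MyClass):
            return NotImplemented
        return self.key == other.key

    def __ne__(self, other):
        result = self.__eq__(other)
        return result if result is NotImplemented else not result

    def __hash__(self):
        return hash(self.key)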
Alex: python's -3 option is warning you about a potential problem; it doesn't know that you aren't using instances of MyClass in sets or as keys in mappings, so it warns that something that you might have been relying on wouldn't work, if you were. If you aren't using MyClass that way, just ignore the warning. It's a dumb tool to help you catch potential problems; in the end, you're expected to be the one with the actual intelligence to work out which warnings actually matter.
If you really care about suppressing the warning - or, indeed, if a class is mutable and you want to make sure it isn't used in sets or as the key in any mapping - the simple assignment __hash__ = None (as Sudipta pointed out) in the class body shall do that for you. Since None isn't callable, this makes instances non-hashable.
class MyClass (object):
def __eq__(self, other): return self is other
__hash__ = None
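A quick check with the class above: equality still works, but any attempt to hash an instance now fails loudly instead of silently falling back to identity hashing.
>>> m = MyClass()
>>> m == m
True
>>> hash(m)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'MyClass'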
In addition to bypassing any instance attributes in the interest of correctness, implicit special method lookup generally also bypasses the __getattribute__() method even of the object’s metaclass.
The docs mention special methods such as __hash__, __repr__ and __len__, and I know from experience it also includes __iter__ for Python 2.7.
To quote an answer to a related question:
"Magic __methods__() are treated specially: They are internally assigned to "slots" in the type data structure to speed up their look-up, and they are only looked up in these slots."
In a quest to improve my answer to another question, I need to know: Which methods, specifically, are we talking about?
You can find an answer in the python3 documentation for object.__getattribute__, which states:
Called unconditionally to implement attribute accesses for instances of the class. If the class also defines __getattr__(), the latter will not be called unless __getattribute__() either calls it explicitly or raises an AttributeError. This method should return the (computed) attribute value or raise an AttributeError exception. In order to avoid infinite recursion in this method, its implementation should always call the base class method with the same name to access any attributes it needs, for example, object.__getattribute__(self, name).
Note: This method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in functions. See Special method lookup.
Also, this page explains exactly how this "machinery" works. Fundamentally, __getattribute__ is called only when you access an attribute with the . (dot) operator (and also by hasattr, as Zagorulkin pointed out).
Note that the page does not specify which special methods are implicitly looked up, so I presume this holds for all of them (which you may find here).
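A small sketch that makes the bypass visible (Python 3; the class name is made up): an explicit x.__len__() goes through __getattribute__, while the implicit lookup done by len(x) does not.
class Tracing(object):
    def __len__(self):
        return 5

    def __getattribute__(self, name):
        print("__getattribute__ called for", name)
        return super(Tracing, self).__getattribute__(name)

t = Tracing()
print(t.__len__())   # prints the __getattribute__ message, then 5
print(len(t))        # prints just 5: the implicit lookup skips __getattribute__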
Checked in 2.7.9
Couldn't find any way to bypass the call to __getattribute__, with any of the magical methods that are found on object or type:
# Preparation step: did this from the console
# magics = set(dir(object) + dir(type))
# got 38 names, for each of the names, wrote a.<that_name> to a file
# Ended up with this:
a.__module__
a.__base__
#...
Put this at the beginning of that file, which I renamed into a proper Python module (asdf.py):
global_counter = 0
class Counter(object):
def __getattribute__(self, name):
# this will count how many times the method was called
global global_counter
global_counter += 1
return super(Counter, self).__getattribute__(name)
a = Counter()
# after this comes the list of 38 attribute accessess
a.__module__
#...
a.__repr__
#...
print global_counter # you're not gonna like it... it printed 38
Then I also tried to get each of those names with getattr and hasattr -> same result: __getattribute__ was called every time.
So if anyone has other ideas... I was too lazy to look inside C code for this, but I'm sure the answer lies somewhere there.
So either there's something that I'm not getting right, or the docs are lying.
super().method will also bypass __getattribute__. This atrocious code will run just fine (Python 3.11).
class Base:
def print(self):
print("whatever")
def __getattribute__(self, item):
raise Exception("Don't access this with a dot!")
class Sub(Base):
def __init__(self):
super().print()
a = Sub()    # the super().print() call in __init__ prints 'whatever'
a.print()    # raises Exception: Don't access this with a dot!
Is it possible to make a decorator that makes attributes lazy that do not eval when you try to access it with hasattr()? I worked out how to make it lazy, but hasattr() makes it evaluate prematurely. E.g.,
class lazyattribute:
# Magic.
class A:
@lazyattribute
def bar(self):
print("Computing")
return 5
>>> a = A()
>>> print(a.bar)
Computing
5
>>> print(a.bar)
5
>>> b = A()
>>> hasattr(b, 'bar')
Computing
True
# Wanted output: True, without "Computing" being printed
It may be difficult to do. From the hasattr documentation:
hasattr(object, name)
The arguments are an object and a string. The result is True if the string is the name of one of the object’s attributes, False if not. (This is implemented by calling getattr(object, name) and seeing whether it raises an exception or not.)
Since attributes may be generated dynamically by a __getattr__ method, there's no other way to reliably check for their presence. For your special situation, maybe testing the dictionaries explicitly would be enough:
any('bar' in d for d in (b.__dict__, b.__class__.__dict__))
What nobody seems to have addressed so far is that perhaps the best thing to do is not to use hasattr(). Instead, go for EAFP (Easier to Ask Forgiveness than Permission).
try:
x = foo.bar
except AttributeError:
# what went in your else-block
...
else:
# what went in your if hasattr(foo, "bar") block
...
This is obviously not a drop-in replacement, and you might have to move stuff around a bit, but it's possibly the "nicest" solution (subjectively, of course).
The problem is that hasattr uses getattr so your attribute is always going to be evaluated when you use hasattr. If you post the code for your lazyattribute magic hopefully someone can suggest an alternative way of testing the presence of the attribute which doesn't require hasattr or getattr. See the help for hasattr:
>>> help(hasattr)
Help on built-in function hasattr in module __builtin__:
hasattr(...)
hasattr(object, name) -> bool
Return whether the object has an attribute with the given name.
(This is done by calling getattr(object, name) and catching exceptions.)
I'm curious why you need something like this. If hasattr ends up calling your "compute function", then so be it. Just how lazy does your property need to be anyway?
Still, here's a rather inelegant way of doing it, by examining the calling function's name. It could probably be coded a little better, but I don't think it should ever be used seriously.
import inspect
class lazyattribute(object):
def __init__(self, func):
self.func = func
def __get__(self, obj, kls=None):
if obj is None or inspect.stack()[1][4][0].startswith('hasattr'):
return None
value = self.func(obj)
setattr(obj, self.func.__name__, value)
return value
class Foo(object):
@lazyattribute
def bar(self):
return 42
orip's answer will be enough only if your object's inheritance hierarchy is one level deep.
You should iterate over the method resolution order of object's class to have a complete solution:
from itertools import chain
def lazy_hasattr(obj, name):
# checks for the attribute without triggering __get__
return any(name in d for d in chain((obj.__dict__,),
(c.__dict__ for c in obj.__class__.mro())))
# Usage:
lazy_hasattr(b,'bar')
Well, the fix is a little hackish, but it consists of the following:
rename hasattr (say as _hasattr)
rebind hasattr as the following:
def hasattr(obj, name):
try:
return obj._hasattr(name) or _hasattr(obj, name)
except:
return _hasattr(obj, name)
implement the class method _hasattr by checking some data structure (i.e. an array) which is populated with all the lazy attribute names (for an array you'd say: name in lazyAttrArray)
finally, somehow have your @lazyattribute decorator add items into some sort of structure (like the array mentioned above), so that when you call _hasattr you look in that structure
this is the step that I'm not quite sure how you'd implement 'cause I haven't worked with creating my own decorators
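Since the answer stops short of the decorator itself, here is one possible sketch of the registry idea (Python 3.6+ for __set_name__). The names lazyattribute, registry, and lazy_hasattr are made up for illustration, and instead of rebinding the built-in hasattr this keeps a separate helper:
class lazyattribute:
    registry = {}                             # owner class -> set of lazy attribute names

    def __init__(self, func):
        self.func = func

    def __set_name__(self, owner, name):
        self.name = name
        lazyattribute.registry.setdefault(owner, set()).add(name)

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        value = self.func(obj)
        obj.__dict__[self.name] = value       # cache: later accesses skip the descriptor
        return value

def lazy_hasattr(obj, name):
    # check for the attribute without triggering its computation
    if name in obj.__dict__:                  # already computed and cached
        return True
    if any(name in lazyattribute.registry.get(cls, ()) for cls in type(obj).mro()):
        return True
    return hasattr(obj, name)                 # ordinary, non-lazy attributes

class A:
    @lazyattribute
    def bar(self):
        print("Computing")
        return 5

a = A()
print(lazy_hasattr(a, 'bar'))   # True, and nothing is computed
print(a.bar)                    # Computing, then 5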