I just spent too long on a bug like the following:
>>> class Odp():
...     def __init__(self):
...         self.foo = "bar"
>>> o = Odp()
>>> o.raw_foo = 3  # oops - meant o.foo
I have a class with an attribute. I was trying to set it and wondering why doing so had no effect. Then I went back to the original class definition and saw that the attribute was named something slightly different. Thus, I was creating and setting a new attribute instead of the one I meant to.
First off, isn't this exactly the type of error that statically-typed languages are supposed to prevent? In this case, what is the advantage of dynamic typing?
Secondly, is there a way I could have forbidden this when defining Odp, and thus saved myself the trouble?
You can implement a __setattr__ method for the purpose -- that's much more robust than __slots__, which is often misused for this purpose (for example, __slots__ is automatically "lost" when the class is inherited from, while __setattr__ survives unless explicitly overridden).
def __setattr__(self, name, value):
    if hasattr(self, name):
        object.__setattr__(self, name, value)
    else:
        raise TypeError('Cannot set name %r on object of type %s' % (
            name, self.__class__.__name__))
You'll have to make sure the hasattr succeeds for the names you do want to be able to set, for example by setting the attributes at a class level or by using object.__setattr__ in your __init__ method rather than direct attribute assignment. (To forbid setting attributes on a class rather than its instances you'll have to define a custom metaclass with a similar special method).
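For instance, here is a minimal sketch of a complete class using this guard, reusing the Odp example from the question (object.__setattr__ seeds the allowed attribute in __init__):

class Odp(object):
    def __init__(self):
        # bypass our own __setattr__ so the initial assignment succeeds
        object.__setattr__(self, 'foo', 'bar')

    def __setattr__(self, name, value):
        if hasattr(self, name):
            object.__setattr__(self, name, value)
        else:
            raise TypeError('Cannot set name %r on object of type %s' % (
                name, self.__class__.__name__))

o = Odp()
o.foo = 'baz'   # fine -- foo already exists
o.raw_foo = 3   # raises TypeError instead of silently creating a new attribute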
Related
'Everything in Python is an object'
So, should all objects have attributes and methods?
I read the statements below on a tutorial site; could you give an example of a pre-defined object in Python that has neither attributes nor methods?
Some objects have neither attributes nor methods
Everything in Python is indeed an object; this is true. Even classes themselves are considered to be objects, and they're indeed the product of the builtin class type, which is not surprisingly an object too. But objects will almost certainly inherit attributes, including methods or data, in most circumstances.
So, should all objects have attributes and methods?
Not necessarily all objects have their own attributes. For instance, an object can inherit an attribute from its class and its class's superclasses, but that attribute or method doesn't necessarily live in the instance's namespace dictionary. In fact, an instance's namespace can be empty, like the following:
class Foo:
    pass

a = Foo()
print(a.__dict__)
a here doesn't have any attributes aside from those inherited from its class, so if you check its namespace through the builtin attribute __dict__, you'll find it to be an empty dictionary. But you might wonder: isn't a.__dict__ an attribute of a? Make a distinction between class-level attributes (attributes inherited from the class or its superclasses) and instance attributes (attributes that belong to the instance and usually live in its namespace __dict__).
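A minimal sketch of that distinction (attribute names illustrative):

class Foo:
    cls_attr = 42              # lives in Foo.__dict__, not in any instance

a = Foo()
print(a.__dict__)              # {} -- the instance namespace starts empty
print(a.cls_attr)              # 42 -- found on the class during lookup
a.inst_attr = 'x'
print(a.__dict__)              # {'inst_attr': 'x'} -- now the instance owns one attribute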
Could you give an example of a pre-defined object in Python that has neither attributes nor methods?
If by pre-defined object you meant a builtin object, I can't imagine such a scenario. Again, even if there are no attributes on the object itself, there would in most cases be attributes inherited from its class or the class's superclasses, if there are any. Probably (and I'm guessing here) the tutorial is asking you to create a class that assigns no attributes to its objects, just like the code I included above.
And this already answers your question better: Is everything an object in python like ruby?
There's a hackish way to emulate a Python object with no attributes.
class NoAttr(object):
    def __getattribute__(self, attr):
        raise AttributeError("no attribute: %s" % attr)
    def __setattr__(self, attr, value):
        raise AttributeError("can't set attribute: %s" % attr)
    def __delattr__(self, attr):
        raise AttributeError("no attribute: %s" % attr)

a = NoAttr()
This instance a, for all intents and purposes, in pure Python, behaves like an object with no attributes (you can try hasattr on it).
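For instance (continuing the snippet above):

hasattr(a, 'anything')   # False -- __getattribute__ raises, so hasattr reports no attribute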
There may be a low-level way to do this in a C extension by implementing a type in C that pathologically stops Python's object implementation from working. Anyway the margin here is too small for writing one.
A pre-defined object with no attributes would defeat the purpose of pre-defining it.
As of Python 3.4, there is a descriptor called DynamicClassAttribute. The documentation states:
types.DynamicClassAttribute(fget=None, fset=None, fdel=None, doc=None)
Route attribute access on a class to __getattr__.
This is a descriptor, used to define attributes that act differently when accessed through an instance and through a class. Instance access remains normal, but access to an attribute through a class will be routed to the class’s __getattr__ method; this is done by raising AttributeError.
This allows one to have properties active on an instance, and have virtual attributes on the class with the same name (see Enum for an example).
New in version 3.4.
It is apparently used in the enum module:
# DynamicClassAttribute is used to provide access to the `name` and
# `value` properties of enum members while keeping some measure of
# protection from modification, while still allowing for an enumeration
# to have members named `name` and `value`. This works because enumeration
# members are not set directly on the enum class -- __getattr__ is
# used to look them up.
@DynamicClassAttribute
def name(self):
    """The name of the Enum member."""
    return self._name_

@DynamicClassAttribute
def value(self):
    """The value of the Enum member."""
    return self._value_
I realise that enums are a little special, but I don't understand how this relates to the DynamicClassAttribute. What does it mean that those attributes are dynamic, how is this different from a normal property, and how do I use a DynamicClassAttribute to my advantage?
New Version:
I was a bit disappointed with the previous answer so I decided to rewrite it a bit:
First have a look at the source code of DynamicClassAttribute and you'll probably notice that it looks very much like a normal property, except for the __get__ method:
def __get__(self, instance, ownerclass=None):
    if instance is None:
        # Here is the difference; a normal property just does: return self
        if self.__isabstractmethod__:
            return self
        raise AttributeError()
    elif self.fget is None:
        raise AttributeError("unreadable attribute")
    return self.fget(instance)
So what this means is: if you access a DynamicClassAttribute (that isn't abstract) on the class, it raises an AttributeError instead of returning self. For instances, instance is None is False, so __get__ is identical to property.__get__.
For normal classes that just results in a visible AttributeError when accessing the attribute:
from types import DynamicClassAttribute

class Fun():
    @DynamicClassAttribute
    def has_fun(self):
        return False

Fun.has_fun
AttributeError - Traceback (most recent call last)
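Contrast that with a plain property, which returns the descriptor itself on class access (a minimal sketch):

class Fun2():
    @property
    def has_fun(self):
        return False

Fun2.has_fun   # <property object at 0x...> -- no AttributeError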
That by itself is not very helpful until you take a look at the "class attribute lookup" procedure used with metaclasses (I found a nice image of this in this blog).
Because if an attribute raises an AttributeError and the class has a metaclass, Python looks at the metaclass.__getattr__ method and sees if that can resolve the attribute. To illustrate this with a minimal example:
from types import DynamicClassAttribute

# Metaclass
class Funny(type):
    def __getattr__(self, value):
        print('search in meta')
        # Normally you would implement some ifs/elifs or a dictionary
        # lookup here, but I'll just return the attribute
        return Funny.dynprop

    # Metaclass's dynprop:
    dynprop = 'Meta'

class Fun(metaclass=Funny):
    def __init__(self, value):
        self._dynprop = value

    @DynamicClassAttribute
    def dynprop(self):
        return self._dynprop
And here comes the "dynamic" part. If you access dynprop on the class, the lookup will search in the meta and return the meta's dynprop:
Fun.dynprop
which prints:
search in meta
'Meta'
So we invoked the metaclass.__getattr__ and returned the original attribute (which was defined with the same name as the new property).
While for instances the dynprop of the Fun-instance is returned:
Fun('Not-Meta').dynprop
we get the overridden attribute:
'Not-Meta'
My conclusion from this is that DynamicClassAttribute is important if you want to allow subclasses to have an attribute with the same name as one used in the metaclass. You'll shadow it on instances, but it's still accessible if you access it on the class.
I did go into the behaviour of Enum in the old version so I left it in here:
Old Version
The DynamicClassAttribute is only useful (I'm not really sure on that point) if you suspect there could be naming conflicts between an attribute set on a subclass and a property on the base class.
You'll need to know at least some basics about metaclasses, because this will not work without them (a nice explanation of how class attributes are looked up can be found in this blog post); the attribute lookup is slightly different with metaclasses.
Suppose you have:
class Funny(type):
    dynprop = 'Very important meta attribute, do not override'

class Fun(metaclass=Funny):
    def __init__(self, value):
        self._stub = value

    @property
    def dynprop(self):
        return 'Haha, overridden it with {}'.format(self._stub)
and then call:
Fun.dynprop
<property object at 0x1b3d9fd19a8>
and on the instance we get:
Fun(2).dynprop
'Haha, overridden it with 2'
Bad ... it's lost. But wait, we can use the metaclass special lookup: let's implement a __getattr__ (fallback) and implement dynprop as a DynamicClassAttribute. Because according to its documentation that's its purpose -- to fall back to the __getattr__ if it's accessed on the class:
from types import DynamicClassAttribute

class Funny(type):
    def __getattr__(self, value):
        print('search in meta')
        return Funny.dynprop

    dynprop = 'Meta'

class Fun(metaclass=Funny):
    def __init__(self, value):
        self._dynprop = value

    @DynamicClassAttribute
    def dynprop(self):
        return self._dynprop
now we access the class-attribute:
Fun.dynprop
which prints:
search in meta
'Meta'
So we invoked the metaclass.__getattr__ and returned the original attribute (which was defined with the same name as the new property).
And for instances:
Fun('Not-Meta').dynprop
we get the overridden attribute:
'Not-Meta'
Well, that's not too bad, considering we can use metaclasses to reroute to previously defined but overridden attributes without creating an instance. This example is the opposite of what is done with Enum, where you define attributes on the subclass:
from enum import Enum

class Fun(Enum):
    name = 'me'
    age = 28
    hair = 'brown'
and want to access these subsequently defined attributes by default:
Fun.name
# <Fun.name: 'me'>
but you also want to allow accessing the name attribute that was defined as a DynamicClassAttribute (which returns the name the variable actually has):
Fun('me').name
# 'name'
because otherwise how could you access the name of 28?
Fun.hair.age
# <Fun.age: 28>
# BUT:
Fun.hair.name
# returns 'hair'
See the difference? Why doesn't the second one return <Fun.name: 'me'>? That's because of this use of DynamicClassAttribute. So you can shadow the original property but "release" it again later. This behaviour is the reverse of the one shown in my example and requires at least the usage of __new__ and __prepare__. But for that you need to know exactly how they work, which is explained in a lot of blogs and stackoverflow answers that can explain it much better than I can, so I'll skip going into that much depth (and I'm not sure if I could explain it in short order).
Actual use-cases might be sparse, but given time one can probably think of some...
Very nice discussion on the documentation of DynamicClassAttribute: "we added it because we needed it"
What is a DynamicClassAttribute
A DynamicClassAttribute is a descriptor that is similar to property. Dynamic is part of the name because you get different results based on whether you access it via the class or via the instance:
instance access is identical to property and simply runs whatever method was decorated, returning its result
class access raises an AttributeError; when this happens Python then searches every parent class (via the mro) looking for that attribute -- when it doesn't find it, it calls the class' metaclass's __getattr__ for one last shot at finding the attribute. __getattr__ can, of course, do whatever it wants -- in the case of EnumMeta __getattr__ looks in the class' _member_map_ to see if the requested attribute is there, and returns it if it is. As a side note: all that searching had a severe performance impact, which is why we ended up putting all members that did not have name conflicts with DynamicClassAttributes in the Enum class' __dict__ after all.
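A stripped-down sketch of that last step (this is not the real EnumMeta code; _member_map_ is the real attribute name, the rest is simplified):

class FakeEnumMeta(type):
    def __getattr__(cls, name):
        # last-chance lookup after the normal mro search failed,
        # mirroring EnumMeta's check of its member map
        try:
            return cls._member_map_[name]
        except KeyError:
            raise AttributeError(name)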
and how do I use it?
You use it just like you would property -- the only difference is that you use it when creating a base class for other Enums. As an example, the Enum from aenum [1] has three reserved names:
name
value
values
values is there to support Enum members with multiple values. That class is effectively:
class Enum(metaclass=EnumMeta):
    @DynamicClassAttribute
    def name(self):
        return self._name_

    @DynamicClassAttribute
    def value(self):
        return self._value_

    @DynamicClassAttribute
    def values(self):
        return self._values_
and now any aenum.Enum can have a values member without messing up Enum.<member>.values.
[1] Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library.
I have a class sysprops in which I'd like to have a number of constants. However, I'd like to pull the values for those constants from the database, so I'd like some sort of hook any time one of these class constants is accessed (something like the __getattribute__ method for instance variables).
class sysprops(object):
    SOME_CONSTANT = 'SOME_VALUE'

sysprops.SOME_CONSTANT  # this statement would not return 'SOME_VALUE' but instead a dynamic value pulled from the database
Although I think it is a very bad idea to do this, it is possible:
class GetAttributeMetaClass(type):
    def __getattribute__(self, key):
        print 'Getting attribute', key
        # delegate to the normal lookup so attribute access still works
        return type.__getattribute__(self, key)

class sysprops(object):
    __metaclass__ = GetAttributeMetaClass
While the other two answers have a valid method, I like to take the route of 'least magic'. You can do something similar to the metaclass approach without actually using one, simply by using a decorator.
def instancer(cls):
    return cls()

@instancer
class SysProps(object):
    def __getattribute__(self, key):
        return key  # dummy
This will create an instance of SysProps and then assign it back to the SysProps name, effectively shadowing the actual class definition and leaving a constant instance in its place.
Since decorators are more common in Python, I find this way easier to grasp for other people who have to read your code.
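For example (continuing the snippet above):

SysProps.SOME_CONSTANT   # returns 'SOME_CONSTANT' -- the lookup goes through the
                         # instance's __getattribute__, since SysProps is now an instance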
sysprops.SOME_CONSTANT can be the return value of a function if SOME_CONSTANT were a property defined on type(sysprops).
In other words, what you are talking about is commonly done when sysprops is an instance instead of a class.
But here is the kicker -- classes are instances of metaclasses. So everything you know about controlling the behavior of instances through the use of classes applies equally well to controlling the behavior of classes through the use of metaclasses.
Usually the metaclass is type, but you are free to define other metaclasses by subclassing type. If you place a property SOME_CONSTANT in the metaclass, then the instance of that metaclass, e.g. sysprops, will have the desired behavior when Python evaluates sysprops.SOME_CONSTANT.
class MetaSysProps(type):
    @property
    def SOME_CONSTANT(cls):
        return 'SOME_VALUE'

class SysProps(object):
    __metaclass__ = MetaSysProps

print(SysProps.SOME_CONSTANT)
yields
SOME_VALUE
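The question asked for values pulled from a database; here is a minimal sketch of that variation, with a dict standing in for the real database (FAKE_DB and its key are hypothetical names):

FAKE_DB = {'SOME_CONSTANT': 'value-from-db'}  # stand-in for a real database query

class MetaSysProps(type):
    @property
    def SOME_CONSTANT(cls):
        return FAKE_DB['SOME_CONSTANT']  # re-fetched on every access

class SysProps(object):
    __metaclass__ = MetaSysProps  # Python 2 syntax, matching the question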
Following this answer it seems that a class' metaclass may be changed after the class has been defined by using the following*:
class MyMetaClass(type):
    # Metaclass magic...
    pass

class A(object):
    pass

A = MyMetaClass(A.__name__, A.__bases__, dict(A.__dict__))
Defining a function
def metaclass_wrapper(cls):
    return MyMetaClass(cls.__name__, cls.__bases__, dict(cls.__dict__))
allows me to apply a decorator to a class definition like so,
@metaclass_wrapper
class B(object):
    pass
It seems that the metaclass magic is applied to B; however, B has no __metaclass__ attribute. Is the above method a sensible way to apply metaclasses to class definitions, even though I am defining and re-defining a class, or would I be better off simply writing
class B(object):
    __metaclass__ = MyMetaClass
    pass
I presume there are some differences between the two methods.
*Note, the original answer in the linked question, MyMetaClass(A.__name__, A.__bases__, A.__dict__), returns a TypeError:
TypeError: type() argument 3 must be a dict, not dict_proxy
It seems that the __dict__ attribute of A (the class definition) has type dict_proxy, whereas the __dict__ attribute of an instance of A has type dict. Why is this? Is this a Python 2.x vs. 3.x difference?
Admittedly, I am a bit late to the party. However, I felt this was worth adding.
This is completely doable. That being said, there are plenty of other ways to accomplish the same goal. However, the decorator solution, in particular, allows for delayed evaluation (obj = dec(obj)), which using __metaclass__ inside the class does not. In typical decorator style, my solution is below.
There is a tricky thing you may run into if you just construct the class without changing the dictionary or copying its attributes: any attributes that the class had previously (before decorating) will appear to be missing. So it is absolutely essential to copy these over and then tweak them as I have in my solution.
Personally, I like to be able to keep track of how an object was wrapped. So, I added the __wrapped__ attribute, which is not strictly necessary. It also makes it more like functools.wraps in Python 3 for classes. However, it can be helpful with introspection. Also, __metaclass__ is added to act more like the normal metaclass use case.
def metaclass(meta):
    def metaclass_wrapper(cls):
        __name = str(cls.__name__)
        __bases = tuple(cls.__bases__)
        __dict = dict(cls.__dict__)

        for each_slot in __dict.get("__slots__", tuple()):
            __dict.pop(each_slot, None)

        __dict["__metaclass__"] = meta
        __dict["__wrapped__"] = cls

        return meta(__name, __bases, __dict)
    return metaclass_wrapper
For a trivial example, take the following.
class MetaStaticVariablePassed(type):
    def __new__(meta, name, bases, dct):
        dct["passed"] = True
        return super(MetaStaticVariablePassed, meta).__new__(meta, name, bases, dct)

@metaclass(MetaStaticVariablePassed)
class Test(object):
    pass
This yields the nice result...
|1> Test.passed
|.> True
Using the decorator in the less usual, but identical way...
class Test(object):
    pass

Test = metaclass(MetaStaticVariablePassed)(Test)
...yields, as expected, the same nice result.
|1> Test.passed
|.> True
The class has no __metaclass__ attribute set... because you never set it!
Which metaclass to use is normally determined by a name __metaclass__ set in a class block. The __metaclass__ attribute isn't set by the metaclass. So if you invoke a metaclass directly rather than setting __metaclass__ and letting Python figure it out, then no __metaclass__ attribute is set.
In fact, normal classes are all instances of the metaclass type, so if the metaclass always set the __metaclass__ attribute on its instances then every class would have a __metaclass__ attribute (most of them set to type).
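A quick way to see that (a minimal sketch):

class MyMetaClass(type):
    pass

B = MyMetaClass('B', (object,), {})

print(type(B) is MyMetaClass)          # True -- the metaclass was applied
print('__metaclass__' in B.__dict__)   # False -- nothing ever set that attribute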
I would not use your decorator approach. It obscures the fact that a metaclass is involved (and which one), it's still one line of boilerplate, and it's just messy to create a class from the 3 defining features of (name, bases, attributes), only to pull those 3 bits back out of the resulting class, throw the class away, and make a new class from those same 3 bits!
When you do this in Python 2.x:
class A(object):
    __metaclass__ = MyMeta

    def __init__(self):
        pass
You'd get roughly the same result if you'd written this:
attrs = {}
attrs['__metaclass__'] = MyMeta

def __init__(self):
    pass

attrs['__init__'] = __init__
A = attrs.get('__metaclass__', type)('A', (object,), attrs)
In reality calculating the metaclass is more complicated, as there actually has to be a search through all the bases to determine whether there's a metaclass conflict, and if one of the bases doesn't have type as its metaclass and attrs doesn't contain __metaclass__ then the default metaclass is the ancestor's metaclass rather than type. This is one situation where I expect your decorator "solution" will differ from using __metaclass__ directly. I'm not sure exactly what would happen if you used your decorator in a situation where using __metaclass__ would give you a metaclass conflict error, but I wouldn't expect it to be pleasant.
Also, if there are any other metaclasses involved, your method would result in them running first (possibly modifying what the name, bases, and attributes are!) and then pulling those out of the class and using it to create a new class. This could potentially be quite different than what you'd get using __metaclass__.
As for the __dict__ not giving you a real dictionary, that's just an implementation detail; I would guess for performance reasons. I doubt there is any spec that says the __dict__ of a (non-class) instance has to be the same type as the __dict__ of a class (which is also an instance btw; just an instance of a metaclass). The __dict__ attribute of a class is a "dictproxy", which allows you to look up attribute keys as if it were a dict but still isn't a dict. type is picky about the type of its third argument; it wants a real dict, not just a "dict-like" object (shame on it for spoiling duck-typing). It's not a 2.x vs 3.x thing; Python 3 behaves the same way, although it gives you a nicer string representation of the dictproxy. Python 2.4 (which is the oldest 2.x I have readily available) also has dictproxy objects for class __dict__ objects.
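A quick check of the types involved (output shown is for Python 3, where the proxy is called mappingproxy; Python 2 prints dictproxy):

class A(object):
    pass

print(type(A.__dict__))     # <class 'mappingproxy'> -- the class's dict is a read-only proxy
print(type(A().__dict__))   # <class 'dict'> -- an instance's dict is a plain dict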
My summary of your question: "I tried a new tricky way to do a thing, and it didn't quite work. Should I use the simple way instead?"
Yes, you should do it the simple way. You haven't said why you're interested in inventing a new way to do it.
I'm doing some distributed computing in which several machines communicate under the assumption that they all have identical versions of various classes. Thus, it seems to be good design to make these classes immutable; not in the sense that it must thwart a user with bad intentions, just immutable enough that it is never modified by accident.
How would I go about this? For example, how would I implement a metaclass that makes the class using it immutable after its definition?
>>> class A(object):
...     __metaclass__ = ImmutableMetaclass
>>> A.something = SomethingElse  # Don't want this
>>> a = A()
>>> a.something = Whatever  # obviously, this is still perfectly fine
Alternate methods are also fine, such as a decorator/function that takes a class and returns an immutable class.
If the old trick of using __slots__ does not fit you, this, or some variant thereof, can do:
simply write the __setattr__ method of your metaclass to be your guard. In this example, I prevent new attributes from being assigned but allow modification of existing ones:
def immutable_meta(name, bases, dct):
    class Meta(type):
        def __init__(cls, name, bases, dct):
            type.__setattr__(cls, "attr", set(dct.keys()))
            type.__init__(cls, name, bases, dct)

        def __setattr__(cls, attr, value):
            if attr not in cls.attr:
                raise AttributeError("Cannot assign attributes to this class")
            return type.__setattr__(cls, attr, value)
    return Meta(name, bases, dct)
class A:
    __metaclass__ = immutable_meta
    b = "test"

a = A()
a.c = 10  # this works
A.c = 20  # raises AttributeError
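In Python 3 the same factory should work via the metaclass keyword, since any callable is accepted there (untested sketch):

class A(metaclass=immutable_meta):
    b = "test"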
Don't waste time on immutable classes.
There are things you can do that are far, far simpler than messing around with trying to create an immutable object.
Here are five separate techniques. You can pick and choose from among them. Any one will work. Some combinations will work, also.
Documentation. Say the attributes shouldn't be changed. Actually, people won't forget this; give them credit.
Unit test. Mock your application objects with a simple mock that raises an exception from __setattr__. Any change to the state of the object is a failure in the unit test. It's easy and doesn't require any elaborate programming.
Override __setattr__ to raise an exception on every attempted write (see the sketch after this list).
collections.namedtuple. Instances are immutable out of the box.
collections.Mapping. It's immutable, but you do need to implement a few methods to make it work.
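A minimal sketch of the __setattr__ override and the namedtuple technique (names illustrative):

import collections

# Override __setattr__: raise on every attempted write
class Guarded(object):
    def __setattr__(self, name, value):
        raise AttributeError("%s is read-only" % type(self).__name__)

# namedtuple: instances are immutable out of the box
Point = collections.namedtuple('Point', ['x', 'y'])
p = Point(1, 2)
# p.x = 3  # AttributeError: can't set attribute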
If you don't mind reusing someone else's work:
http://packages.python.org/pysistence/
Immutable persistent (in the functional, not write-to-disk, sense) data structures.
Even if you don't use them as-is, the source code should provide some inspiration. Their expando class, for example, takes an object in its constructor and returns an immutable version of it.