Is it possible to get the namespace parent, or encapsulating type, of a class?
class base:
    class sub:
        def __init__(self):
            # self is "__main__.extra.sub"
            # want to create object of type "__main__.extra" from this
            pass

class extra(base):
    class sub(base.sub):
        pass

o = extra.sub()
The problem in base.sub.__init__ is getting extra from the extra.sub.
The only solutions I can think of at the moment involve having all subclasses of base provide some link to their encapsulating class type, or turning the type of self in base.sub.__init__ into a string and manipulating it into a new type string. Both are a bit ugly.
It's clearly possible to go the other way: type(self()).sub would give you extra.sub from inside base.sub.__init__ for an extra-type object, but how do I do .. instead of .sub ? :)
The real answer is that there is no general way to do this. Python classes are normal objects, but they are created a bit differently. A class does not exist until well after its entire body has been executed. Once a class is created, it can be bound to many different names. The only references it has to where it was created are the __module__ and __qualname__ attributes, but both of these are mutable.
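For instance (a quick illustration with made-up names Outer and Inner, not from the question itself):

import __main__  # only so the example is self-contained

class Outer:
    class Inner:
        pass

print(Outer.Inner.__module__)    # '__main__'
print(Outer.Inner.__qualname__)  # 'Outer.Inner'

Alias = Outer.Inner              # the same class object, reachable under another name
Alias.__qualname__ = 'Renamed'   # and the record of where it was defined can be rewritten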
In practice, it is possible to write your example like this:
class Sub:
    def __init__(self):
        pass

class Base:
    Sub = Sub
    Sub.__qualname__ = 'Base.Sub'

class Sub(Sub):
    pass

class Extra(Base):
    Sub = Sub
    Sub.__qualname__ = 'Extra.Sub'

del Sub  # Unlink from global namespace
Barring the capitalization, this behaves exactly as your original example. Hopefully this clarifies which code has access to what, and shows that the most robust way to determine the enclosing scope of a class is to explicitly assign it somewhere. You can do this in any number of ways. The trivial way is just to assign it. Going back to your original notation:
class Base:
    class Sub:
        def __init__(self):
            print(self.enclosing)

Base.Sub.enclosing = Base

class Extra(Base):
    class Sub(Base.Sub):
        pass

Extra.Sub.enclosing = Extra
Notice that since Base does not exist while its body is being executed, the assignment has to happen after both classes are created. You can bypass this by using a metaclass or a decorator. That will allow you to mess with the namespace before the class object is assigned to a name, making the change more transparent.
class NestedMeta(type):
    def __init__(cls, name, bases, namespace):
        for name, obj in namespace.items():
            if isinstance(obj, type):
                obj.enclosing = cls

class Base(metaclass=NestedMeta):
    class Sub:
        def __init__(self):
            print(self.enclosing)

class Extra(Base):
    class Sub(Base.Sub):
        pass
But this is again somewhat unreliable because not all metaclasses are an instance of type, which takes us back to the first statement in this answer.
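The decorator route mentioned above avoids metaclasses entirely; here is a rough sketch of that alternative (the decorator name links_nested is invented for illustration):

def links_nested(cls):
    # Point every class defined in the decorated class's body back at it.
    for obj in vars(cls).values():
        if isinstance(obj, type):
            obj.enclosing = cls
    return cls

@links_nested
class Base:
    class Sub:
        def __init__(self):
            print(self.enclosing)

@links_nested
class Extra(Base):
    class Sub(Base.Sub):
        pass

Extra.Sub()  # prints <class '__main__.Extra'>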
In many cases, you can use the __qualname__ and __module__ attributes to get the name of the surrounding class:
import sys
cls = type(o)
getattr(sys.modules[cls.__module__], '.'.join(cls.__qualname__.split('.')[:-1]))
This is a very literal answer to your question. It just shows one way of getting the class in the enclosing scope, without addressing the probable design flaws that led to this being necessary in the first place, or any of the many possible corner cases that this would not cover.
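Wrapped up as a helper, the same lookup might look like this (a sketch only; enclosing_class is an invented name, and it only handles a single level of nesting directly inside a module-level class):

import sys

def enclosing_class(obj):
    # Resolve everything before the last dot of __qualname__ in the defining
    # module; this breaks for renamed classes, local classes, or deeper nesting.
    cls = type(obj)
    outer_name, _, _ = cls.__qualname__.rpartition('.')
    return getattr(sys.modules[cls.__module__], outer_name)

# With the question's classes, enclosing_class(o) would return the class extra.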
I am studying python, and although I think I get the whole concept and notion of Python, today I stumbled upon a piece of code that I did not fully understand:
Say I have a class that is supposed to define Circles but lacks a body:
class Circle():
    pass
Since I have not defined any attributes, how can I do this:
my_circle = Circle()
my_circle.radius = 12
The weird part is that Python accepts the above statement. I don't understand why Python doesn't raise an undefined name error. I do understand that via dynamic typing I just bind variables to objects whenever I want, but shouldn't an attribute radius exist in the Circle class to allow me to do this?
EDIT: Lots of wonderful information in your answers! Thank you everyone for all those fantastic answers! It's a pity I only get to mark one as an answer.
A leading principle is that there is no such thing as a declaration. That is, you never declare "this class has a method foo" or "instances of this class have an attribute bar", let alone making a statement about the types of objects to be stored there. You simply define a method, attribute, class, etc. and it's added. As JBernardo points out, any __init__ method does the very same thing. It wouldn't make a lot of sense to arbitrarily restrict creation of new attributes to methods with the name __init__. And it's sometimes useful to store a function as __init__ that doesn't actually have that name (e.g. decorators), and such a restriction would break that.
Now, this isn't universally true. Builtin types omit this capability as an optimization. Via __slots__, you can also prevent this on user-defined classes. But this is merely a space optimization (no need for a dictionary for every object), not a correctness thing.
If you want a safety net, well, too bad. Python does not offer one, and you cannot reasonably add one, and most importantly, it would be shunned by Python programmers who embrace the language (read: almost all of those you want to work with). Testing and discipline still go a long way toward ensuring correctness. Don't use the liberty to make up attributes outside of __init__ if it can be avoided, and do automated testing. I very rarely have an AttributeError or a logical error due to trickery like this, and of those that happen, almost all are caught by tests.
Just to clarify some misunderstandings in the discussions here. This code:
class Foo(object):
    def __init__(self, bar):
        self.bar = bar

foo = Foo(5)
And this code:
class Foo(object):
    pass

foo = Foo()
foo.bar = 5
are exactly equivalent. There really is no difference; they do exactly the same thing. The difference is that in the first case it's encapsulated and it's clear that the bar attribute is a normal part of Foo-type objects; in the second case it is not clear that this is so.
In the first case you cannot create a Foo object that doesn't have the bar attribute (well, you probably can, but not easily); in the second case the Foo objects will not have a bar attribute unless you set it.
So although the code is programmatically equivalent, it's used in different cases.
Python lets you store attributes of any name on virtually any instance (or class, for that matter). It's possible to block this either by writing the class in C, like the built-in types, or by using __slots__ which allows only certain names.
The reason it works is that most instances store their attributes in a dictionary. Yes, a regular Python dictionary like you'd define with {}. The dictionary is stored in an instance attribute called __dict__. In fact, some people say "classes are just syntactic sugar for dictionaries." That is, you can do everything you can do with a class with a dictionary; classes just make it easier.
You're used to static languages where you must define all attributes at compile time. In Python, class definitions are executed, not compiled; classes are objects just like any other; and adding attributes is as easy as adding an item to a dictionary. This is why Python is considered a dynamic language.
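You can watch the dictionary grow as attributes are added (a short illustrative session):

class Circle:
    pass

c = Circle()
print(c.__dict__)   # {} -- no attributes yet
c.radius = 12
print(c.__dict__)   # {'radius': 12} -- the new attribute is just a dict entry
print(vars(c))      # vars() is the usual way to read an object's __dict__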
No, Python is flexible like that; it does not enforce what attributes you can store on user-defined classes.
There is a trick however, using the __slots__ attribute on a class definition will prevent you from creating additional attributes not defined in the __slots__ sequence:
>>> class Foo(object):
...     __slots__ = ()
...
>>> f = Foo()
>>> f.bar = 'spam'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Foo' object has no attribute 'bar'
>>> class Foo(object):
...     __slots__ = ('bar',)
...
>>> f = Foo()
>>> f.bar
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: bar
>>> f.bar = 'spam'
It creates a radius data member of my_circle.
If you had asked for my_circle.radius before assigning it, it would have thrown an exception:
>>> print my_circle.radius # AttributeError
Interestingly, this does not change the class; just that one instance. So:
>>> my_circle = Circle()
>>> my_circle.radius = 5
>>> my_other_circle = Circle()
>>> print my_other_circle.radius # AttributeError
There are two types of attributes in Python - Class Data Attributes and Instance Data Attributes.
Python gives you the flexibility of creating data attributes on the fly.
Since an instance data attribute is related to an instance, you can do that in the __init__ method, or you can do it after you have created your instance.
class Demo(object):
    classAttr = 30

    def __init__(self):
        self.inInit = 10

demo = Demo()
demo.outInit = 20
Demo.new_class_attr = 45  # You can also create a class attribute here.

print demo.classAttr  # Can access it through the instance
del demo.classAttr    # AttributeError: can only delete it through the class
demo.classAttr = 67   # Creates an instance attribute for this instance.
del demo.classAttr    # Now OK.
print Demo.classAttr
So, you see that we have created two instance attributes, one inside __init__ and one outside, after the instance is created.
But there is a difference: the instance attribute created inside __init__ will be set for all instances, while one created outside can exist on some instances and not others.
This is unlike Java, where each instance of a class has the same set of instance variables.
NOTE: While you can access a class attribute through an instance, you cannot delete it that way.
Also, if you try to modify a class attribute through an instance, you actually create an instance attribute which shadows the class attribute.
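A short demonstration of that shadowing, using the Demo class from above:

d = Demo()
print d.classAttr    # 30, found on the class
d.classAttr = 67     # creates an instance attribute with the same name
print d.classAttr    # 67, the instance attribute shadows the class one
print Demo.classAttr # 30, the class attribute itself is untouched
del d.classAttr      # deletes only the instance attribute
print d.classAttr    # 30 again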
How to prevent new attribute creation?
Using class
To control the creation of new attributes, you can override the __setattr__ method. It will be called every time my_obj.x = 123 is executed.
See the documentation:
class A:
    def __init__(self):
        # Call object.__setattr__ to bypass the attribute checking
        super().__setattr__('x', 123)

    def __setattr__(self, name, value):
        if not hasattr(self, name):
            # Cannot create new attributes
            raise AttributeError('Cannot set new attributes')
        # Can update existing attributes
        super().__setattr__(name, value)

a = A()
a.x = 123  # Allowed
a.y = 456  # raises AttributeError
Note that users can still bypass the check by calling object.__setattr__(a, 'attr_name', attr_value) directly.
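For example, with the A class defined above:

a = A()
object.__setattr__(a, 'y', 456)  # goes around A.__setattr__, so no error is raised
print(a.y)                       # 456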
Using dataclass
With dataclasses, you can forbid the creation of new attributes with frozen=True. It will also prevent existing attributes from being updated.
import dataclasses

@dataclasses.dataclass(frozen=True)
class A:
    x: int

a = A(x=123)
a.y = 123  # Raises FrozenInstanceError
a.x = 123  # Raises FrozenInstanceError
Note: dataclasses.FrozenInstanceError is a subclass of AttributeError
To add to Conchylicultor's answer, Python 3.10 added a new parameter to dataclass.
The slots parameter will create the __slots__ attribute in the class, preventing creation of new attributes outside of __init__, but allowing assignments to existing attributes.
If slots=True, assigning to an attribute that was not defined will throw an AttributeError.
Here is an example with slots and with frozen:
from dataclasses import dataclass

@dataclass
class Data:
    x: float = 0
    y: float = 0

@dataclass(frozen=True)
class DataFrozen:
    x: float = 0
    y: float = 0

@dataclass(slots=True)
class DataSlots:
    x: float = 0
    y: float = 0

p = Data(1, 2)
p.x = 5  # ok
p.z = 8  # ok

p = DataFrozen(1, 2)
p.x = 5  # FrozenInstanceError
p.z = 8  # FrozenInstanceError

p = DataSlots(1, 2)
p.x = 5  # ok
p.z = 8  # AttributeError
As delnan said, you can obtain this behavior with the __slots__ attribute. But the fact that it is a way to save memory space and speed up attribute access does not change the fact that it is (also) a means to disable dynamic attributes.
Disabling dynamic attributes is a reasonable thing to do, if only to prevent subtle bugs due to spelling mistakes. "Testing and discipline" is fine but relying on automated validation is certainly not wrong either – and not necessarily unpythonic either.
Also, since the attrs library reached version 16 in 2016 (obviously way after the original question and answers), creating a closed class with slots has never been easier.
>>> import attr
>>>
>>> @attr.s(slots=True)
... class Circle:
...     radius = attr.ib()
...
>>> f = Circle(radius=2)
>>> f.color = 'red'
AttributeError: 'Circle' object has no attribute 'color'
I have a simple Python class:
class Car(object):
    def __init__(self):
        self.dirty = False
        self.owner = 'Alice'
        self.wheels = []

    def __setattr__(self, name, value):
        # bypass __setattr__ for the flag itself to avoid infinite recursion
        super(Car, self).__setattr__('dirty', True)
        super(Car, self).__setattr__(name, value)
After some experimenting, I see __setattr__ is called only when setting owner or wheels:
car_instance.owner = 'Bob'
car_instance.wheels = []
It does not get called when appending to wheels:
wheels.append(wheel_instance)
This does not surprise me, and I understand why it is not being called.
I am just wondering how I would get it to be called for the 3 scenarios I listed:
car_instance.owner = 'Bob' # SCENARIO 1
car_instance.wheels = [] # SCENARIO 2
wheels.append(wheel_instance) # SCENARIO 3
I've experimented a bit with the different descriptors, but no luck. I ultimately just want to set dirty = True when a class member is modified (set, reset, modified, appended to, etc.).
You cannot do this using only descriptors. Full stop.
You have to provide a custom list class which does what you want. This is not too difficult if your custom list inherits from collections.abc.MutableSequence. As you can see, you can get away with "only" implementing __getitem__, __setitem__, __delitem__, __len__, and insert—the others are filled in by the abstract base class MutableSequence.
Use a normal list as backing storage and implement the methods using that.
Note that the index argument to __setitem__, __getitem__ and __delitem__ can be a slice, which is trickier to implement than you'd expect. I recommend a tight test suite.
Once you have your list class, you use it as the type for your class's attributes (you can control the type using @property or custom descriptors, preventing the user from assigning any other type).
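Here is a minimal sketch of such a list (the names TrackedList and owner are invented for illustration; slice arguments are simply handed to the backing list, and the usage at the bottom assumes the Car class from the question):

from collections.abc import MutableSequence

class TrackedList(MutableSequence):
    """List-like container that flips owner.dirty on every mutation."""

    def __init__(self, owner, iterable=()):
        self._owner = owner
        self._items = list(iterable)

    def __getitem__(self, index):
        return self._items[index]

    def __setitem__(self, index, value):
        self._items[index] = value
        self._owner.dirty = True

    def __delitem__(self, index):
        del self._items[index]
        self._owner.dirty = True

    def __len__(self):
        return len(self._items)

    def insert(self, index, value):
        # append(), extend(), += etc. are all routed through insert()
        # by MutableSequence, so this one hook covers them.
        self._items.insert(index, value)
        self._owner.dirty = True

car = Car()
car.wheels = TrackedList(car)    # Car.__setattr__ fires here as usual
car.dirty = False                # reset the flag for the demonstration
car.wheels.append('front-left')  # not an attribute assignment on car...
assert car.dirty                 # ...but the tracked list set the flag anyway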
This article has a snippet showing usage of __bases__ to dynamically change the inheritance hierarchy of some Python code, by adding a class to an existing class's collection of classes from which it inherits. OK, that's hard to read; code is probably clearer:
class Friendly:
    def hello(self):
        print 'Hello'

class Person: pass

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
That is, Person doesn't inherit from Friendly at the source level; rather, this inheritance relation is added dynamically at runtime by modification of the __bases__ attribute of the Person class. However, if you change Friendly and Person to be new-style classes (by inheriting from object), you get the following error:
TypeError: __bases__ assignment: 'Friendly' deallocator differs from 'object'
A bit of Googling on this seems to indicate some incompatibilities between new-style and old style classes in regards to changing the inheritance hierarchy at runtime. Specifically: "New-style class objects don't support assignment to their bases attribute".
My question: is it possible to make the above Friendly/Person example work using new-style classes in Python 2.7+, possibly by use of the __mro__ attribute?
Disclaimer: I fully realise that this is obscure code. I fully realize that in real production code tricks like this tend to border on unreadable, this is purely a thought experiment, and for funzies to learn something about how Python deals with issues related to multiple inheritance.
Ok, again, this is not something you should normally do, this is for informational purposes only.
Where Python looks for a method on an instance object is determined by the __mro__ attribute of the class which defines that object (the Method Resolution Order attribute). Thus, if we could modify the __mro__ of Person, we'd get the desired behaviour. Something like:
setattr(Person, '__mro__', (Person, Friendly, object))
The problem is that __mro__ is a readonly attribute, and thus setattr won't work. Maybe if you're a Python guru there's a way around that, but clearly I fall short of guru status as I cannot think of one.
A possible workaround is to simply redefine the class:
def modify_Person_to_be_friendly():
    # so that we're modifying the global identifier 'Person'
    global Person
    # now just redefine the class using type(), specifying that the new
    # class should inherit from Friendly and have all attributes from
    # our old Person class
    Person = type('Person', (Friendly,), dict(Person.__dict__))

def main():
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()  # works!
What this doesn't do is modify any previously created Person instances to have the hello() method. For example (just modifying main()):
def main():
    oldperson = Person()
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()          # works! But:
    oldperson.hello()  # does not
If the details of the type call aren't clear, then read e-satis' excellent answer on 'What is a metaclass in Python?'.
I've been struggling with this too, and was intrigued by your solution, but Python 3 takes it away from us:
AttributeError: attribute '__dict__' of 'type' objects is not writable
I actually have a legitimate need for a decorator that replaces the (single) superclass of the decorated class. It would require too lengthy a description to include here (I tried, but couldn't get it to a reasonable length and limited complexity -- it came up in the context of the use by many Python applications of a Python-based enterprise server where different applications needed slightly different variations of some of the code.)
The discussion on this page and others like it provided hints that the problem of assigning to __bases__ only occurs for classes with no superclass defined (i.e., whose only superclass is object). I was able to solve this problem (for both Python 2.7 and 3.2) by defining the classes whose superclass I needed to replace as being subclasses of a trivial class:
## T is used so that the other classes are not direct subclasses of object,
## since classes whose base is object don't allow assignment to their __bases__ attribute.
class T: pass

class A(T):
    def __init__(self):
        print('Creating instance of {}'.format(self.__class__.__name__))

## ordinary inheritance
class B(A): pass

## dynamically specified inheritance
class C(T): pass

A()  # -> Creating instance of A
B()  # -> Creating instance of B
C.__bases__ = (A,)
C()  # -> Creating instance of C

## attempt at dynamically specified inheritance starting with a direct subclass
## of object doesn't work
class D: pass
D.__bases__ = (A,)
D()
## Result is:
## TypeError: __bases__ assignment: 'A' deallocator differs from 'object'
I cannot vouch for the consequences, but this code does what you want on py2.7.2.
class Friendly(object):
    def hello(self):
        print 'Hello'

class Person(object): pass

# we can't change the original classes, so we replace them
class newFriendly: pass
newFriendly.__dict__ = dict(Friendly.__dict__)
Friendly = newFriendly

class newPerson: pass
newPerson.__dict__ = dict(Person.__dict__)
Person = newPerson

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
We know that this is possible. Cool. But we'll never use it!
Right off the bat, all the caveats of messing with the class hierarchy dynamically are in effect.
But if it has to be done, then, apparently, there is a hack that gets around the "deallocator differs from 'object'" issue when modifying the __bases__ attribute for new-style classes.
You can define an intermediate class
class Object(object): pass
which derives from the built-in object, and have your own classes inherit from it rather than from object directly.
That's it; now your new-style classes can modify __bases__ without any problem.
In my tests this actually worked very well as all existing (before changing the inheritance) instances of it and its derived classes felt the effect of the change including their mro getting updated.
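Roughly, the trick looks like this (a sketch reconstructed from the description above, not the answerer's exact code):

class Object(object): pass   # intermediate base, so nothing derives from object directly

class Friendly(Object):
    def hello(self):
        print('Hello')

class Person(Object): pass

p = Person()
Person.__bases__ = (Friendly,)   # no "deallocator differs from 'object'" error here
p.hello()                        # the pre-existing instance picks up the new base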
I needed a solution for this which:
Works with both Python 2 (>= 2.7) and Python 3 (>= 3.2).
Lets the class bases be changed after dynamically importing a dependency.
Lets the class bases be changed from unit test code.
Works with types that have a custom metaclass.
Still allows unittest.mock.patch to function as expected.
Here's what I came up with:
def ensure_class_bases_begin_with(namespace, class_name, base_class):
    """ Ensure the named class's bases start with the base class.

        :param namespace: The namespace containing the class name.
        :param class_name: The name of the class to alter.
        :param base_class: The type to be the first base class for the
            newly created type.
        :return: ``None``.

        Call this function after ensuring `base_class` is
        available, before using the class named by `class_name`.

        """
    existing_class = namespace[class_name]
    assert isinstance(existing_class, type)

    bases = list(existing_class.__bases__)
    if base_class is bases[0]:
        # Already bound to a type with the right bases.
        return
    bases.insert(0, base_class)

    new_class_namespace = existing_class.__dict__.copy()
    # Type creation will assign the correct ‘__dict__’ attribute.
    del new_class_namespace['__dict__']

    metaclass = existing_class.__metaclass__
    new_class = metaclass(class_name, tuple(bases), new_class_namespace)

    namespace[class_name] = new_class
Used like this within the application:
# foo.py

# Type `Bar` is not available at first, so can't inherit from it yet.
class Foo(object):
    __metaclass__ = type

    def __init__(self):
        self.frob = "spam"

    def __unicode__(self):
        return "Foo"

# … later …

import bar

ensure_class_bases_begin_with(
    namespace=globals(),
    class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
    base_class=bar.Bar)
Use like this from within unit test code:
# test_foo.py

""" Unit test for `foo` module. """

import unittest
import mock

import foo
import bar

ensure_class_bases_begin_with(
    namespace=foo.__dict__,
    class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
    base_class=bar.Bar)

class Foo_TestCase(unittest.TestCase):
    """ Test cases for `Foo` class. """

    def setUp(self):
        patcher_unicode = mock.patch.object(
            foo.Foo, '__unicode__')
        patcher_unicode.start()
        self.addCleanup(patcher_unicode.stop)

        self.test_instance = foo.Foo()

        patcher_frob = mock.patch.object(
            self.test_instance, 'frob')
        patcher_frob.start()
        self.addCleanup(patcher_frob.stop)

    def test_instantiate(self):
        """ Should create an instance of `Foo`. """
        instance = foo.Foo()
The above answers are good if you need to change an existing class at runtime. However, if you are just looking to create a new class that inherits from some other class, there is a much cleaner solution. I got this idea from https://stackoverflow.com/a/21060094/3533440, but I think the example below better illustrates a legitimate use case.
def make_default(Map, default_default=None):
    """Returns a class which behaves identically to the given
    Map class, except it gives a default value for unknown keys."""
    class DefaultMap(Map):
        def __init__(self, default=default_default, **kwargs):
            self._default = default
            super().__init__(**kwargs)

        def __missing__(self, key):
            return self._default

    return DefaultMap

DefaultDict = make_default(dict, default_default='wug')

d = DefaultDict(a=1, b=2)
assert d['a'] == 1
assert d['b'] == 2
assert d['c'] == 'wug'
Correct me if I'm wrong, but this strategy seems very readable to me, and I would use it in production code. This is very similar to functors in OCaml.
This method isn't technically inheriting during runtime, since __mro__ can't be changed. But what I'm doing here is using __getattr__ to be able to access any attributes or methods from a certain class. (Read the comments in the order of the numbers placed before them; it makes more sense.)
class Sub:
    def __init__(self, f, cls):
        self.f = f
        self.cls = cls

    # 6) this method will pass the self parameter
    # (which is the original class object we passed)
    # and then it will fill in the rest of the arguments
    # using *args and **kwargs
    def __call__(self, *args, **kwargs):
        # 7) the multiple try / except statements
        # are for making sure if an attribute was
        # accessed instead of a function, the __call__
        # method will just return the attribute
        try:
            return self.f(self.cls, *args, **kwargs)
        except TypeError:
            try:
                return self.f(*args, **kwargs)
            except TypeError:
                return self.f

# 1) our base class
class S:
    def __init__(self, func):
        self.cls = func

    def __getattr__(self, item):
        # 5) we are wrapping the attribute we get in the Sub class
        # so we can implement the __call__ method there
        # to be able to pass the parameters in the correct order
        return Sub(getattr(self.cls, item), self.cls)

# 2) class we want to inherit from
class L:
    def run(self, s):
        print("run" + s)

# 3) we create an instance of our base class
# and then pass an instance (or just the class object)
# as a parameter to this instance
s = S(L)  # 4) in this case, I'm using the class object

s.run("1")
So this sort of substitution and redirection will simulate the inheritance of the class we wanted to inherit from. And it even works with attributes or methods that don't take any parameters.