For a class Foo,
class Foo:
    ...
is there a way to call a specific method whenever Foo.XXX (an arbitrary class attribute XXX, such as bar, bar2, etc.) is being resolved, but NOT whenever Foo().XXX (an arbitrary instance attribute) is resolved? To the best of my knowledge, overriding
def __getattr__(self, a):
    ...
applies to the second case only. I'd like to benefit from the fact that __getattr__ for instances is called only when the corresponding attribute is not found. Does anyone know how this should be solved? I've read through the manual and found nothing of relevance.
I don't particularly care what happens if someone tries to assign a value to any class or instance attribute of Foo.
The use case (to avoid XY):
Class Foo is a messenger. Instances of Foo are basically tuples with extra steps. However, there are special cases of Foo, which I'd like to store as class attributes of Foo (I managed that using a special decorator). However, the number of special cases of Foo is gradually growing while they differ very insignificantly. It would be very efficient code-wise for me to have this special function called whenever someone wants Foo.SPECIAL_CASE_123, because mapping from "SPECIAL_CASE_123" to the actual special case is very fast and very similar to mapping "SPECIAL_CASE_456" to the other corresponding special case.
Use a metaclass to define how classes behave. Specifically, redefine the class's __getattr__ to change how attributes missing on the class are handled:
# The "Type of Foo"
class MetaFoo(type):
def __getattr__(cls, item):
return f"Dynamically handled: {item}"
class Foo(metaclass=MetaFoo):
REGULAR_CASE_22 = "Catch 22"
print(Foo.REGULAR_CASE_22) # Catch 22
print(Foo.SPECIAL_CASE_123) # Dynamically handled: SPECIAL_CASE_123
print(Foo().REGULAR_CASE_22) # Catch 22
print(Foo().SPECIAL_CASE_123) # AttributeError: 'Foo' object has no attribute 'SPECIAL_CASE_123'
Note that certain attribute lookups, such as for special methods like __add__, will circumvent the metaclass attribute lookup.
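For example, here is a small sketch of that caveat, continuing the MetaFoo/Foo definitions above (my own illustration, not part of the original answer): explicit attribute access reaches __getattr__, but the implicit lookup performed by the + operator does not.
print(Foo.__add__)       # Dynamically handled: __add__  (explicit lookup falls back to MetaFoo.__getattr__)

try:
    Foo + Foo            # implicit special-method lookup skips __getattr__
except TypeError as e:
    print(e)             # unsupported operand type(s) for +: 'MetaFoo' and 'MetaFoo'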
You could try it with @classmethod:
class Foo:
    @classmethod
    def XXX(cls, a):
        return a

print(Foo.XXX('Hello World'))
Output:
Hello World
I have a class with a private constant _BAR = object().
In a child class, outside of a method (no access to self), I want to refer to _BAR.
Here is a contrived example:
class Foo:
    _BAR = object()

    def __init__(self, bar: object = _BAR):
        ...

class DFoo(Foo):
    """Child class where I want to access private class variable from parent."""

    def __init__(self, baz: object = super()._BAR):
        super().__init__(baz)
Unfortunately, this doesn't work. One gets an error: RuntimeError: super(): no arguments
Is there a way to use super outside of a method to get a parent class attribute?
The workaround is to use Foo._BAR, I am wondering though if one can use super to solve this problem.
Inside of DFoo, you cannot refer to Foo._BAR without referring to Foo. Python names are searched in the local, enclosing, global, and built-in scopes, in that order (the so-called LEGB rule), and _BAR is not present in any of them.
Let's ignore an explicit Foo._BAR.
Further, it gets inherited: DFoo._BAR will be looked up first in DFoo, and when not found, in Foo.
What other means are there to get the Foo reference? Foo is a base class of DFoo. Can we use this relationship? Yes and no. Yes at execution time and no at definition time.
The problem is that while DFoo is being defined, it does not exist yet. We have no starting point from which to follow the inheritance chain. This rules out an indirect reference (DFoo -> Foo) both in a def method(self, ...): line and in a class attribute such as _DBAR = _BAR.
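A minimal sketch of that limitation (my own illustration): inside the body of DFoo, the name DFoo is not bound yet, so any attempt to go DFoo -> Foo fails.
class Foo:
    _BAR = object()

class DFoo(Foo):
    _DBAR = DFoo._BAR    # NameError: name 'DFoo' is not defined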
It is possible to work around this limitation using a class decorator. Define the class and then modify it:
def deco(cls):
    cls._BAR = cls.__mro__[1]._BAR * 2  # __mro__[0] is the class itself
    return cls

class Foo:
    _BAR = 10

@deco
class DFoo(Foo):
    pass

print(Foo._BAR, DFoo._BAR)  # 10 20
A similar effect can be achieved with a metaclass.
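Here is a hedged sketch of that metaclass variant (DerivingMeta is a name I made up; the derivation mirrors the decorator above):
class DerivingMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:                          # the root class keeps its own _BAR
            cls._BAR = bases[0]._BAR * 2
        return cls

class Foo(metaclass=DerivingMeta):
    _BAR = 10

class DFoo(Foo):
    pass

print(Foo._BAR, DFoo._BAR)  # 10 20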
The last option to get a reference to Foo is at execution time. We have the object self, its type is DFoo, its parent type is Foo, and that is where _BAR lives. The well-known super() is a shortcut to get the parent.
I have assumed only one base class for simplicity. If there were several base classes, super() would return only one of them. The example class decorator does the same. To understand how several bases are sorted into a sequence, see how the MRO (Method Resolution Order) works.
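As a sketch of that execution-time access, one could move the default into __init__, where super() works (the None sentinel is my own choice, not part of the original question):
class Foo:
    _BAR = object()

    def __init__(self, bar: object = _BAR):
        self.bar = bar

class DFoo(Foo):
    def __init__(self, baz: object = None):
        if baz is None:
            baz = super()._BAR       # resolved at call time, when DFoo exists
        super().__init__(baz)

print(DFoo().bar is Foo._BAR)        # True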
My final thought is that I could not think up a use-case where such access as in the question would be required.
Short answer: you can't!
I'm not going into much detail about the super class itself here. (I've written a pure Python implementation in this gist if you'd like to read it.)
But now let's see how we can call super:
1- Without arguments:
From PEP 3135:
This PEP proposes syntactic sugar for use of the super type to
automatically construct instances of the super type binding to the
class that a method was defined in, and the instance (or class object
for classmethods) that the method is currently acting upon.
The new syntax:
super()
is equivalent to:
super(__class__, <firstarg>)
...and <firstarg> is the first parameter of the method
So this is not an option because you don't have access to the "instance".
(The body of a function/method is not executed unless it gets called, so there is no problem with DFoo not existing yet inside the method definition.)
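A small sketch of that equivalence (my own example): inside a method body, the compiler provides the __class__ cell, so both spellings resolve the same attribute.
class Base:
    def who(self):
        return "Base"

class Child(Base):
    def who(self):
        # the zero-argument form and the explicit two-argument form are equivalent here
        return (super().who(), super(__class__, self).who())

print(Child().who())   # ('Base', 'Base')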
2- super(type, instance)
From documentation:
The zero argument form only works inside a class definition, as the
compiler fills in the necessary details to correctly retrieve the
class being defined, as well as accessing the current instance for
ordinary methods.
What were those necessary details mentioned above? A "type" and an "instance":
We can pass neither the "instance" nor the "type", which is DFoo here. The first, because we are not inside a method, so we have no access to the instance (self). The second, because by the time the body of the DFoo class is being executed, there is no reference to DFoo; it doesn't exist yet. The body of the class is executed inside a namespace, which is a dictionary. After that, a new instance of type type, here named DFoo, is created from that populated dictionary and added to the global namespace. That is roughly what the class keyword does in its simple form.
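A rough sketch of that mechanism (my own illustration, not the exact CPython machinery): the body populates a plain dictionary first, and only afterwards does type() turn it into the class object.
def class_body():
    namespace = {}
    # the statements of the class body would run here;
    # at this point the name DFoo does not exist anywhere
    namespace['greet'] = lambda self: "hello"
    return namespace

DFoo = type("DFoo", (object,), class_body())   # only now is the class created
print(DFoo().greet())                          # hello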
3- super(type, type):
If the second argument is a type, issubclass(type2, type) must be
true
Same reason as mentioned above about accessing DFoo.
4- super(type):
If the second argument is omitted, the super object returned is
unbound.
If you have an unbound super object, you can't do attribute lookup with it (except for the super object's own attributes). Remember that a super object is a descriptor. You can turn an unbound super object into a bound one by calling __get__ and passing the instance:
class A:
    a = 1

class B(A):
    pass

class C(B):
    sup = super(B)
    try:
        sup.a
    except AttributeError as e:
        print(e)  # 'super' object has no attribute 'a'

obj = C()
print(obj.sup.a)  # 1
Accessing obj.sup automatically calls __get__.
And again, the same reason about accessing the DFoo type mentioned above applies; nothing has changed. This is added just for the record. These are the ways we can call super.
I am studying python, and although I think I get the whole concept and notion of Python, today I stumbled upon a piece of code that I did not fully understand:
Say I have a class that is supposed to define Circles but lacks a body:
class Circle():
    pass
Since I have not defined any attributes, how can I do this:
my_circle = Circle()
my_circle.radius = 12
The weird part is that Python accepts the above statement. I don't understand why Python doesn't raise an undefined name error. I do understand that via dynamic typing I just bind variables to objects whenever I want, but shouldn't an attribute radius exist in the Circle class to allow me to do this?
EDIT: Lots of wonderful information in your answers! Thank you everyone for all those fantastic answers! It's a pity I only get to mark one as an answer.
A leading principle is that there is no such thing as a declaration. That is, you never declare "this class has a method foo" or "instances of this class have an attribute bar", let alone making a statement about the types of objects to be stored there. You simply define a method, attribute, class, etc. and it's added. As JBernardo points out, any __init__ method does the very same thing. It wouldn't make a lot of sense to arbitrarily restrict creation of new attributes to methods with the name __init__. And it's sometimes useful to store a function as __init__ that doesn't actually have that name (e.g. decorators), and such a restriction would break that.
Now, this isn't universally true. Builtin types omit this capability as an optimization. Via __slots__, you can also prevent this on user-defined classes. But this is merely a space optimization (no need for a dictionary for every object), not a correctness thing.
If you want a safety net, well, too bad. Python does not offer one, and you cannot reasonably add one, and most importantly, it would be shunned by Python programmers who embrace the language (read: almost all of those you want to work with). Testing and discipline still go a long way toward ensuring correctness. Don't use the liberty to make up attributes outside of __init__ if it can be avoided, and do automated testing. I very rarely have an AttributeError or a logical error due to trickery like this, and of those that happen, almost all are caught by tests.
Just to clarify some misunderstandings in the discussions here. This code:
class Foo(object):
    def __init__(self, bar):
        self.bar = bar

foo = Foo(5)
And this code:
class Foo(object):
    pass

foo = Foo()
foo.bar = 5
are exactly equivalent. There really is no difference. It does exactly the same thing. The difference is that in the first case it's encapsulated and it's clear that the bar attribute is a normal part of Foo-type objects. In the second case it is not clear that this is so.
In the first case you cannot create a Foo object that doesn't have the bar attribute (well, you probably can, but not easily); in the second case the Foo objects will not have a bar attribute unless you set it.
So although the code is programmatically equivalent, it's used in different cases.
Python lets you store attributes of any name on virtually any instance (or class, for that matter). It's possible to block this either by writing the class in C, like the built-in types, or by using __slots__ which allows only certain names.
The reason it works is that most instances store their attributes in a dictionary. Yes, a regular Python dictionary like you'd define with {}. The dictionary is stored in an instance attribute called __dict__. In fact, some people say "classes are just syntactic sugar for dictionaries." That is, you can do everything you can do with a class with a dictionary; classes just make it easier.
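You can see that dictionary directly; here is a small sketch using the empty Circle class from the question (my own addition):
class Circle:
    pass

my_circle = Circle()
print(my_circle.__dict__)    # {}
my_circle.radius = 12        # same effect as my_circle.__dict__['radius'] = 12
print(my_circle.__dict__)    # {'radius': 12}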
You're used to static languages where you must define all attributes at compile time. In Python, class definitions are executed, not compiled; classes are objects just like any other; and adding attributes is as easy as adding an item to a dictionary. This is why Python is considered a dynamic language.
No, Python is flexible like that; it does not enforce what attributes you can store on user-defined classes.
There is a trick, however: using the __slots__ attribute on a class definition will prevent you from creating additional attributes not defined in the __slots__ sequence:
>>> class Foo(object):
...     __slots__ = ()
...
>>> f = Foo()
>>> f.bar = 'spam'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Foo' object has no attribute 'bar'
>>> class Foo(object):
...     __slots__ = ('bar',)
...
>>> f = Foo()
>>> f.bar
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: bar
>>> f.bar = 'spam'
It creates a radius data member of my_circle.
If you had asked it for my_circle.radius it would have thrown an exception:
<![CDATA[
>>> print(my_circle.radius)  # AttributeError
]]>
Interestingly, this does not change the class; just that one instance. So:
>>> my_circle = Circle()
>>> my_circle.radius = 5
>>> my_other_circle = Circle()
>>> print(my_other_circle.radius)  # AttributeError
There are two types of attributes in Python - Class Data Attributes and Instance Data Attributes.
Python gives you the flexibility of creating data attributes on the fly.
Since an instance data attribute is related to an instance, you can do that in the __init__ method or after you have created your instance.
class Demo(object):
    classAttr = 30

    def __init__(self):
        self.inInit = 10

demo = Demo()
demo.outInit = 20
Demo.new_class_attr = 45     # You can also create a class attribute here.

print(demo.classAttr)        # Can access the class attribute through the instance.
# del demo.classAttr         # AttributeError - can only delete it through the class.
demo.classAttr = 67          # Creates an instance attribute shadowing the class attribute.
del demo.classAttr           # Now OK - deletes the instance attribute.
print(Demo.classAttr)        # 30
So, you see that we have created two instance attributes, one inside __init__ and one outside, after the instance is created.
But a difference is that the instance attribute created inside __init__ will be set for all instances, while if created outside, different instances can have different instance attributes.
This is unlike Java, where every instance of a class has the same set of instance variables.
NOTE: While you can access a class attribute through an instance, you cannot delete it that way.
Also, if you try to modify a class attribute through an instance, you actually create an instance attribute which shadows the class attribute.
How do you prevent the creation of new attributes?
Using a class
To control the creation of new attributes, you can override the __setattr__ method. It is called every time my_obj.x = 123 is executed.
See the documentation:
class A:
    def __init__(self):
        # Call object.__setattr__ to bypass the attribute checking
        super().__setattr__('x', 123)

    def __setattr__(self, name, value):
        # Cannot create new attributes
        if not hasattr(self, name):
            raise AttributeError('Cannot set new attributes')
        # Can update existing attributes
        super().__setattr__(name, value)

a = A()
a.x = 123  # Allowed
a.y = 456  # Raises AttributeError
Note that users can still bypass the checking if they call directly object.__setattr__(a, 'attr_name', attr_value).
Using dataclass
With dataclasses, you can forbid the creation of new attributes with frozen=True. It will also prevent existing attributes from being updated.
import dataclasses

@dataclasses.dataclass(frozen=True)
class A:
    x: int

a = A(x=123)
a.y = 123  # Raises FrozenInstanceError
a.x = 123  # Raises FrozenInstanceError
Note: dataclasses.FrozenInstanceError is a subclass of AttributeError
To add to Conchylicultor's answer, Python 3.10 added a new parameter to dataclass.
The slots parameter will create the __slots__ attribute in the class, preventing the creation of attributes that were not declared as fields, while still allowing assignment to existing attributes.
If slots=True, assigning to an attribute that was not defined will throw an AttributeError.
Here is an example with slots and with frozen:
from dataclasses import dataclass

@dataclass
class Data:
    x: float = 0
    y: float = 0

@dataclass(frozen=True)
class DataFrozen:
    x: float = 0
    y: float = 0

@dataclass(slots=True)
class DataSlots:
    x: float = 0
    y: float = 0

p = Data(1, 2)
p.x = 5  # ok
p.z = 8  # ok

p = DataFrozen(1, 2)
p.x = 5  # FrozenInstanceError
p.z = 8  # FrozenInstanceError

p = DataSlots(1, 2)
p.x = 5  # ok
p.z = 8  # AttributeError
As delnan said, you can obtain this behavior with the __slots__ attribute. But the fact that it is a way to save memory space and speed up attribute access does not change the fact that it is (also) a means to disable dynamic attributes.
Disabling dynamic attributes is a reasonable thing to do, if only to prevent subtle bugs due to spelling mistakes. "Testing and discipline" is fine, but relying on automated validation is certainly not wrong, and not necessarily unpythonic either.
Also, since the attrs library reached version 16 in 2016 (obviously way after the original question and answers), creating a closed class with slots has never been easier.
>>> import attr
...
... @attr.s(slots=True)
... class Circle:
...     radius = attr.ib()
...
... f = Circle(radius=2)
... f.color = 'red'
AttributeError: 'Circle' object has no attribute 'color'
I was looking into the meaning of the default parameters object and self that appear in class and function definitions. Setting that aside: when we access an attribute of a class, should we use Foo (the class reference) or Foo() (an instance of the class)?
If you are reading a normal attribute then it doesn't matter. If you are binding a normal attribute then you must use the correct one in order for the code to work. If you are accessing a descriptor then you must use an instance.
The details of Python's class semantics are quite well documented in the data model. Especially the __get__ semantics are at work here. Instances basically stack their namespace on top of their class's namespace and add some boilerplate for calling methods.
There are some large "it depends on what you are doing" gotchas at work here. The most important question: do you want to access class or instance attributes? Second, do you want attributes or methods?
Let's take this example:
class Foo(object):
    bar = 1
    baz = 2

    def __init__(self, foobar="barfoo", baz=3):
        self.foobar = foobar
        self.baz = baz

    def meth(self, param):
        print(self, param)

    @classmethod
    def clsmeth(cls, param):
        print(cls, param)

    @staticmethod
    def stcmeth(param):
        print(param)
Here, bar is a class attribute, so you can get it via Foo.bar. Since instances have implicit access to their class namespace, you can also get it as Foo().bar. foobar is an instance attribute, since it is never bound to the class (only instances, i.e. selfs) - you can only get it as Foo().foobar. Last, baz is both a class and an instance attribute. By default, Foo.baz == 2 and Foo().baz == 3, since the class attribute is hidden by the instance attribute set in __init__.
Similarly, in an assignment there are slight differences whether you work on the class or an instance. Foo.bar=2 will set the class attribute (also for all instances) while Foo().bar=2 will create an instance attribute that shadows the class attribute for this specific instance.
For methods, it is somewhat similar. However, here you get the implicit self parameter for an instance method (which is what a function defined in a class becomes). Basically, the call Foo().meth(param=x) is silently translated to Foo.meth(self=Foo(), param=x). This is why it is usually not valid to call Foo.meth(param=x) - meth is not "bound" to an instance and thus lacks the self parameter.
Now, sometimes you do not need any instance data in a method - for example, you have a strict string transformation that is an implementation detail of a larger parser class. This is where @classmethod and @staticmethod come into play. A classmethod's first parameter is always the class, as opposed to the instance for regular methods. Foo().clsmeth(param=x) and Foo.clsmeth(param=x) result in a call of clsmeth(cls=Foo, param=x). Here, the two are equivalent. Going one step further, a staticmethod doesn't get any class or instance information - it is like a raw function bound to the class's namespace.
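A short usage sketch of the class defined above, showing which call forms work for each kind of method (my own addition):
f = Foo()

f.meth(param=1)          # <__main__.Foo object ...> 1 - self is filled in automatically
Foo.meth(f, param=1)     # same call with an explicit instance
# Foo.meth(param=1)      # TypeError: meth() missing 1 required positional argument: 'self'

f.clsmeth(param=2)       # <class '__main__.Foo'> 2
Foo.clsmeth(param=2)     # identical - cls is filled in either way

f.stcmeth(param=3)       # 3
Foo.stcmeth(param=3)     # 3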
I just spent too long on a bug like the following:
>>> class Odp():
...     def __init__(self):
...         self.foo = "bar"
...
>>> o = Odp()
>>> o.raw_foo = 3  # oops - meant o.foo
I have a class with an attribute. I was trying to set it, and wondering why it had no effect. Then, I went back to the original class definition and saw that the attribute was named something slightly different. Thus, I was creating/setting a new attribute instead of the one I meant to.
First off, isn't this exactly the type of error that statically-typed languages are supposed to prevent? In this case, what is the advantage of dynamic typing?
Secondly, is there a way I could have forbidden this when defining Odp, and thus saved myself the trouble?
You can implement a __setattr__ method for the purpose -- that's much more robust than __slots__, which is often misused for this purpose (for example, __slots__ is automatically "lost" when the class is inherited from, while __setattr__ survives unless explicitly overridden).
def __setattr__(self, name, value):
    if hasattr(self, name):
        object.__setattr__(self, name, value)
    else:
        raise TypeError('Cannot set name %r on object of type %s' % (
            name, self.__class__.__name__))
You'll have to make sure the hasattr succeeds for the names you do want to be able to set, for example by setting the attributes at a class level or by using object.__setattr__ in your __init__ method rather than direct attribute assignment. (To forbid setting attributes on a class rather than its instances you'll have to define a custom metaclass with a similar special method).
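A hedged sketch of that metaclass idea (FrozenClassMeta is a name I made up): the same kind of __setattr__ check, but applied to the class object itself rather than to its instances.
class FrozenClassMeta(type):
    def __setattr__(cls, name, value):
        if hasattr(cls, name):
            super().__setattr__(name, value)
        else:
            raise TypeError('Cannot set name %r on class %s'
                            % (name, cls.__name__))

class Odp(metaclass=FrozenClassMeta):
    foo = "bar"

Odp.foo = "baz"          # allowed - the attribute already exists
Odp.raw_foo = 3          # TypeError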