What are some (concrete) use-cases for metaclasses? - python

I have a friend who likes to use metaclasses, and regularly offers them as a solution.
I am of the mind that you almost never need to use metaclasses. Why? Because I figure if you are doing something like that to a class, you should probably be doing it to an object. And a small redesign/refactor is in order.
Being able to use metaclasses has caused a lot of people in a lot of places to use classes as some kind of second-rate object, which just seems disastrous to me. Is programming to be replaced by meta-programming? The addition of class decorators has unfortunately made it even more acceptable.
So please, I am desperate to know your valid (concrete) use-cases for metaclasses in Python. Or to be enlightened as to why mutating classes is better than mutating objects, sometimes.
I will start:
Sometimes when using a third-party library it is useful to be able to mutate the class in a certain way.
(This is the only case I can think of, and it's not concrete)

I was asked the same question recently, and came up with several answers. I hope it's OK to revive this thread, as I wanted to elaborate on a few of the use cases mentioned, and add a few new ones.
Most metaclasses I've seen do one of two things:
Registration (adding a class to a data structure):
models = {}

class ModelMetaclass(type):
    def __new__(meta, name, bases, attrs):
        models[name] = cls = type.__new__(meta, name, bases, attrs)
        return cls

class Model(object):
    __metaclass__ = ModelMetaclass
Whenever you subclass Model, your class is registered in the models dictionary:
>>> class A(Model):
...     pass
...
>>> class B(A):
...     pass
...
>>> models
{'A': <class '__main__.A'>,
 'B': <class '__main__.B'>}
This can also be done with class decorators:
models = {}

def model(cls):
    models[cls.__name__] = cls
    return cls

@model
class A(object):
    pass
Or with an explicit registration function:
models = {}

def register_model(cls):
    models[cls.__name__] = cls

class A(object):
    pass

register_model(A)
Actually, this is pretty much the same: you mention class decorators unfavorably, but it's really nothing more than syntactic sugar for a function invocation on a class, so there's no magic about it.
Anyway, the advantage of metaclasses in this case is inheritance, as they work for any subclasses, whereas the other solutions only work for subclasses explicitly decorated or registered.
>>> class B(A):
...     pass
...
>>> models
{'A': <class '__main__.A'>}  # No B :(
Refactoring (modifying class attributes or adding new ones):
class ModelMetaclass(type):
    def __new__(meta, name, bases, attrs):
        fields = {}
        for key, value in attrs.items():
            if isinstance(value, Field):
                value.name = '%s.%s' % (name, key)
                fields[key] = value
        for base in bases:
            if hasattr(base, '_fields'):
                fields.update(base._fields)
        attrs['_fields'] = fields
        return type.__new__(meta, name, bases, attrs)

class Model(object):
    __metaclass__ = ModelMetaclass
Whenever you subclass Model and define some Field attributes, they are injected with their names (for more informative error messages, for example), and grouped into a _fields dictionary (for easy iteration, without having to look through all the class attributes and all its base classes' attributes every time):
>>> class A(Model):
...     foo = Integer()
...
>>> class B(A):
...     bar = String()
...
>>> B._fields
{'foo': Integer('A.foo'), 'bar': String('B.bar')}
Again, this can be done (without inheritance) with a class decorator:
def model(cls):
    fields = {}
    for key, value in vars(cls).items():
        if isinstance(value, Field):
            value.name = '%s.%s' % (cls.__name__, key)
            fields[key] = value
    for base in cls.__bases__:
        if hasattr(base, '_fields'):
            fields.update(base._fields)
    cls._fields = fields
    return cls

@model
class A(object):
    foo = Integer()

class B(A):
    bar = String()
# B.bar has no name :(
# B._fields is {'foo': Integer('A.foo')} :(
Or explicitly:
class A(object):
    foo = Integer('A.foo')
    _fields = {'foo': foo}  # Don't forget all the base classes' fields, too!
Although, contrary to your advocacy of readable and maintainable non-meta programming, this is much more cumbersome, redundant and error-prone:
class B(A):
    bar = String()

# vs.

class B(A):
    bar = String('bar')
    _fields = {'B.bar': bar, 'A.foo': A.foo}
Having considered the most common and concrete use cases, the only cases where you absolutely HAVE to use metaclasses are when you want to modify the class name or list of base classes, because once defined, these parameters are baked into the class, and no decorator or function can unbake them.
class Metaclass(type):
    def __new__(meta, name, bases, attrs):
        return type.__new__(meta, 'foo', (int,), attrs)

class Baseclass(object):
    __metaclass__ = Metaclass

class A(Baseclass):
    pass

class B(A):
    pass

print A.__name__          # foo
print B.__name__          # foo
print issubclass(B, A)    # False
print issubclass(B, int)  # True
This may be useful in frameworks for issuing warnings whenever classes with similar names or incomplete inheritance trees are defined, but I can't think of a reason beside trolling to actually change these values. Maybe David Beazley can.
Anyway, in Python 3, metaclasses also have the __prepare__ method, which lets you evaluate the class body into a mapping other than a dict, thus supporting ordered attributes, overloaded attributes, and other wicked cool stuff:
import collections

class Metaclass(type):
    @classmethod
    def __prepare__(meta, name, bases, **kwds):
        return collections.OrderedDict()
    def __new__(meta, name, bases, attrs, **kwds):
        print(list(attrs))
        # Do more stuff...

class A(metaclass=Metaclass):
    x = 1
    y = 2

# prints ['x', 'y'] rather than ['y', 'x']
 
class ListDict(dict):
    def __setitem__(self, key, value):
        self.setdefault(key, []).append(value)

class Metaclass(type):
    @classmethod
    def __prepare__(meta, name, bases, **kwds):
        return ListDict()
    def __new__(meta, name, bases, attrs, **kwds):
        print(attrs['foo'])
        # Do more stuff...

class A(metaclass=Metaclass):
    def foo(self):
        pass
    def foo(self, x):
        pass

# prints [<function foo at 0x...>, <function foo at 0x...>] rather than <function foo at 0x...>
You might argue ordered attributes can be achieved with creation counters, and overloading can be simulated with default arguments:
import itertools

class Attribute(object):
    _counter = itertools.count()
    def __init__(self):
        self._count = Attribute._counter.next()

class A(object):
    x = Attribute()
    y = Attribute()

A._order = sorted([(k, v) for k, v in vars(A).items() if isinstance(v, Attribute)],
                  key=lambda (k, v): v._count)
 
class A(object):
    def _foo0(self):
        pass
    def _foo1(self, x):
        pass
    def foo(self, x=None):
        if x is None:
            return self._foo0()
        else:
            return self._foo1(x)
Besides being much more ugly, it's also less flexible: what if you want ordered literal attributes, like integers and strings? What if None is a valid value for x?
Here's a creative way to solve the first problem:
import sys

class Builder(object):
    def __call__(self, cls):
        cls._order = self.frame.f_code.co_names
        return cls

def ordered():
    builder = Builder()
    def trace(frame, event, arg):
        builder.frame = frame
        sys.settrace(None)
    sys.settrace(trace)
    return builder

@ordered()
class A(object):
    x = 1
    y = 'foo'

print A._order  # ['x', 'y']
And here's a creative way to solve the second one:
_undefined = object()

class A(object):
    def _foo0(self):
        pass
    def _foo1(self, x):
        pass
    def foo(self, x=_undefined):
        if x is _undefined:
            return self._foo0()
        else:
            return self._foo1(x)
But this is much, MUCH voodoo-er than a simple metaclass (especially the first one, which really melts your brain). My point is, you look at metaclasses as unfamiliar and counter-intuitive, but you can also look at them as the next step of evolution in programming languages: you just have to adjust your mindset. After all, you could probably do everything in C, including defining a struct with function pointers and passing it as the first argument to its functions. A person seeing C++ for the first time might say, "what is this magic? Why is the compiler implicitly passing this to methods, but not to regular and static functions? It's better to be explicit and verbose about your arguments". But then, object-oriented programming is much more powerful once you get it; and so is this, uh... quasi-aspect-oriented programming, I guess. And once you understand metaclasses, they're actually very simple, so why not use them when convenient?
And finally, metaclasses are rad, and programming should be fun. Using standard programming constructs and design patterns all the time is boring and uninspiring, and hinders your imagination. Live a little! Here's a metametaclass, just for you.
class MetaMetaclass(type):
    def __new__(meta, name, bases, attrs):
        def __new__(meta, name, bases, attrs):
            cls = type.__new__(meta, name, bases, attrs)
            cls._label = 'Made in %s' % meta.__name__
            return cls
        attrs['__new__'] = __new__
        return type.__new__(meta, name, bases, attrs)

class China(type):
    __metaclass__ = MetaMetaclass

class Taiwan(type):
    __metaclass__ = MetaMetaclass

class A(object):
    __metaclass__ = China

class B(object):
    __metaclass__ = Taiwan

print A._label  # Made in China
print B._label  # Made in Taiwan
Edit
This is a pretty old question, but it's still getting upvotes, so I thought I'd add a link to a more comprehensive answer. If you'd like to read more about metaclasses and their uses, I've just published an article about it here.

The purpose of metaclasses isn't to replace the class/object distinction with metaclass/class - it's to change the behaviour of class definitions (and thus their instances) in some way. Effectively it's to alter the behaviour of the class statement in ways that may be more useful for your particular domain than the default. The things I have used them for are:
Tracking subclasses, usually to register handlers. This is handy when using a plugin-style setup, where you wish to register a handler for a particular thing simply by subclassing and setting up a few class attributes. E.g. suppose you write a handler for various music formats, where each class implements appropriate methods (play / get tags etc.) for its type. Adding a handler for a new type becomes:
class Mp3File(MusicFile):
    extensions = ['.mp3']  # Register this type as a handler for mp3 files
    ...
    # Implementation of mp3 methods goes here
The metaclass then maintains a dictionary of {'.mp3' : MP3File, ... } etc, and constructs an object of the appropriate type when you request a handler through a factory function.
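For illustration, here is a minimal sketch of what such a registry could look like (the names MusicMeta, handlers and music_file_for are mine for this sketch, not from any particular library; Python 3 metaclass syntax):

import os

handlers = {}

class MusicMeta(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        # Register this class under each extension it declares
        for ext in attrs.get('extensions', ()):
            handlers[ext] = cls

class MusicFile(metaclass=MusicMeta):
    extensions = ()  # the base class registers nothing
    def __init__(self, path):
        self.path = path

class Mp3File(MusicFile):
    extensions = ['.mp3']

def music_file_for(path):
    # Factory: choose the handler class by file extension
    ext = os.path.splitext(path)[1].lower()
    return handlers[ext](path)

print(type(music_file_for('song.mp3')).__name__)  # Mp3File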
Changing behaviour. You may want to attach a special meaning to certain attributes, resulting in altered behaviour when they are present. For example, you may want to look for methods with the name _get_foo and _set_foo and transparently convert them to properties. As a real-world example, here's a recipe I wrote to give more C-like struct definitions. The metaclass is used to convert the declared items into a struct format string, handling inheritance etc, and produce a class capable of dealing with it.
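A minimal sketch of that _get_foo/_set_foo idea (my illustration, not the recipe's actual code; Python 3):

class PropertyMeta(type):
    def __new__(meta, name, bases, attrs):
        # Pair up _get_xxx/_set_xxx methods and expose them as properties
        for key in list(attrs):
            if key.startswith('_get_'):
                prop = key[len('_get_'):]
                attrs[prop] = property(attrs[key], attrs.get('_set_' + prop))
        return super().__new__(meta, name, bases, attrs)

class Point(metaclass=PropertyMeta):
    def __init__(self, x):
        self._x = x
    def _get_x(self):
        return self._x
    def _set_x(self, value):
        self._x = value

p = Point(1)
p.x = 42      # transparently goes through _set_x
print(p.x)    # 42, via _get_x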
For other real-world examples, take a look at various ORMs, like sqlalchemy's ORM or sqlobject. Again, the purpose is to interpret definitions (here SQL column definitions) with a particular meaning.

I have a class that handles non-interactive plotting, as a frontend to Matplotlib. However, on occasion one wants to do interactive plotting. With only a couple functions I found that I was able to increment the figure count, call draw manually, etc, but I needed to do these before and after every plotting call. So to create both an interactive plotting wrapper and an offscreen plotting wrapper, I found it was more efficient to do this via metaclasses, wrapping the appropriate methods, than to do something like:
class PlottingInteractive:
    add_slice = wrap_pylab_newplot(add_slice)
This method doesn't keep up with API changes and so on, but one that iterates over the class attributes in __init__ before re-setting the class attributes is more efficient and keeps things up to date:
import types

class _Interactify(type):
    def __init__(cls, name, bases, d):
        super(_Interactify, cls).__init__(name, bases, d)
        for base in bases:
            for attrname in dir(base):
                if attrname in d:
                    continue  # If overridden, don't reset
                attr = getattr(cls, attrname)
                if type(attr) == types.MethodType:
                    if attrname.startswith("add_"):
                        setattr(cls, attrname, wrap_pylab_newplot(attr))
                    elif attrname.startswith("set_"):
                        setattr(cls, attrname, wrap_pylab_show(attr))
Of course, there might be better ways to do this, but I've found this to be effective. Of course, this could also be done in __new__ or __init__, but this was the solution I found the most straightforward.

Let's start with Tim Peters' classic quote:
Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don't (the people who actually need them know with certainty that they need them, and don't need an explanation about why). — Tim Peters (c.l.p post 2002-12-22)
Having said that, I have (periodically) run across true uses of metaclasses. The one that comes to mind is in Django where all of your models inherit from models.Model. models.Model, in turn, does some serious magic to wrap your DB models with Django's ORM goodness. That magic happens by way of metaclasses. It creates all manner of exception classes, manager classes, etc. etc.
See django/db/models/base.py, class ModelBase() for the beginning of the story.

A reasonable pattern of metaclass use is doing something once when a class is defined rather than repeatedly whenever the same class is instantiated.
When multiple classes share the same special behaviour, repeating __metaclass__=X is obviously better than repeating the special purpose code and/or introducing ad-hoc shared superclasses.
But even with only one special class and no foreseeable extension, __new__ and __init__ of a metaclass are a cleaner way to initialize class variables or other global data than intermixing special-purpose code and normal def and class statements in the class definition body.
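For instance, a minimal sketch of that pattern, with names invented for illustration:

class CachedColumnsMeta(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        # Computed once, at class definition time, not on every instantiation
        cls.columns = sorted(k for k in attrs if not k.startswith('_'))

class Record(metaclass=CachedColumnsMeta):
    id = 0
    name = ''

print(Record.columns)  # ['id', 'name']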

Metaclasses can be handy for constructing Domain Specific Languages in Python. Concrete examples are Django's and SQLObject's declarative syntax for database schemata.
A basic example from A Conservative Metaclass by Ian Bicking:
The metaclasses I've used have been primarily to support a sort of declarative style of programming. For instance, consider a validation schema:

class Registration(schema.Schema):
    first_name = validators.String(notEmpty=True)
    last_name = validators.String(notEmpty=True)
    mi = validators.MaxLength(1)
    class Numbers(foreach.ForEach):
        class Number(schema.Schema):
            type = validators.OneOf(['home', 'work'])
            phone_number = validators.PhoneNumber()
Some other techniques: Ingredients for Building a DSL in Python (pdf).
Edit (by Ali): An example of doing this using collections and instances is what I would prefer. The important fact is the instances, which give you more power and eliminate the reason to use metaclasses. It's further worth noting that your example uses a mixture of classes and instances, which is surely an indication that you can't just do it all with metaclasses, and it creates a truly non-uniform way of doing it.
number_validator = [
    v.OneOf('type', ['home', 'work']),
    v.PhoneNumber('phone_number'),
]

validators = [
    v.String('first_name', notEmpty=True),
    v.String('last_name', notEmpty=True),
    v.MaxLength('mi', 1),
    v.ForEach([number_validator]),
]
It's not perfect, but already there is almost zero magic, no need for metaclasses, and improved uniformity.

I was thinking the same thing just yesterday and completely agree. The complications in the code caused by attempts to make it more declarative generally make the codebase harder to maintain, harder to read and less pythonic in my opinion.
It also normally requires a lot of copy.copy()ing (to maintain inheritance and to copy from class to instance), and means you have to look in many places to see what's going on (always looking from the metaclass up), which goes against the Python grain as well.
I have been picking through formencode and sqlalchemy code to see if such a declarative style was worth it, and it's clearly not. Such style should be left to descriptors (such as property and methods) and immutable data.
Ruby has better support for such declarative styles and I am glad the core python language is not going down that route.
I can see their use for debugging: add a metaclass to all your base classes to get richer info.
I also see their use in (very) large projects to get rid of some boilerplate code (but at the loss of clarity). sqlalchemy, for example, uses them elsewhere, to add a particular custom method to all subclasses based on an attribute value in their class definition. E.g. a toy example:
class test(baseclass_with_metaclass):
    method_maker_value = "hello"
could have a metaclass that generates a method in that class with special properties based on "hello" (say, a method that adds "hello" to the end of a string). It could be good for maintainability to make sure you don't have to write a method in every subclass you make; instead, all you have to define is method_maker_value.
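A rough sketch of what such a metaclass might look like (my illustration of the toy example, not sqlalchemy's actual code; Python 3):

class MethodMakerMeta(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        value = attrs.get('method_maker_value')
        if value is not None:
            # Generate a method specialized on the class's declared value
            def append_value(self, s):
                return s + value
            cls.append_value = append_value

class Base(metaclass=MethodMakerMeta):
    pass

class Test(Base):
    method_maker_value = "hello"

print(Test().append_value("say "))  # say hello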
The need for this is so rare, though, and it only cuts down on a bit of typing, that it's not really worth considering unless you have a large enough codebase.

The only legitimate use-case of a metaclass is to keep other nosy developers from touching your code. Once a nosy developer masters metaclasses and starts poking around with yours, throw in another level or two to keep them out. If that doesn't work, start using type.__new__ or perhaps some scheme using a recursive metaclass.
(written tongue in cheek, but I've seen this kind of obfuscation done. Django is a perfect example)

The only time I used metaclasses in Python was when writing a wrapper for the Flickr API.
My goal was to scrape flickr's api site and dynamically generate a complete class hierarchy to allow API access using Python objects:
# Both the photo type and the flickr.photos.search API method
# are generated at "run-time"
for photo in flickr.photos.search(text='balloons'):
    print photo.description
So in that example, because I generated the entire Python Flickr API from the website, I really don't know the class definitions at runtime. Being able to dynamically generate types was very useful.

Metaclasses aren't replacing programming! They're just a trick which can automate some tasks or make them more elegant. A good example of this is the Pygments syntax highlighting library. It has a class called RegexLexer which lets the user define a set of lexing rules as regular expressions on a class. A metaclass is used to turn the definitions into a useful parser.
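In the same spirit, here is a generic sketch of the technique (not Pygments' actual RegexLexer implementation): a metaclass compiles rules declared on the class, once, at definition time.

import re

class LexerMeta(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        # Compile the declared (pattern, token_name) pairs at class definition
        cls._compiled = [(re.compile(p), tok) for p, tok in attrs.get('tokens', ())]

class SimpleLexer(metaclass=LexerMeta):
    tokens = [
        (r'\d+', 'NUMBER'),
        (r'[a-zA-Z_]\w*', 'NAME'),
    ]
    def lex(self, text):
        pos = 0
        while pos < len(text):
            for regex, tok in self._compiled:
                m = regex.match(text, pos)
                if m:
                    yield tok, m.group()
                    pos = m.end()
                    break
            else:
                pos += 1  # skip characters no rule matches

print(list(SimpleLexer().lex('x1 42')))
# [('NAME', 'x1'), ('NUMBER', '42')]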
They're like salt; it's easy to use too much.

Some GUI libraries have trouble when multiple threads try to interact with them. tkinter is one such example; and while one can explicitly handle the problem with events and queues, it can be far simpler to use the library in a manner that ignores the problem altogether. Behold -- the magic of metaclasses.
Being able to dynamically rewrite an entire library seamlessly so that it works properly as expected in a multithreaded application can be extremely helpful in some circumstances. The safetkinter module does that with the help of a metaclass provided by the threadbox module -- events and queues not needed.
One neat aspect of threadbox is that it does not care what class it clones. It provides an example of how all base classes can be touched by a metaclass if needed. A further benefit that comes with metaclasses is that they run on inheriting classes as well. Programs that write themselves -- why not?

You never absolutely need to use a metaclass, since you can always construct a class that does what you want using inheritance or aggregation of the class you want to modify.
That said, it can be very handy in Smalltalk and Ruby to be able to modify an existing class, but Python doesn't like to do that directly.
There's an excellent DeveloperWorks article on metaclassing in Python that might help. The Wikipedia article is also pretty good.

The way I used metaclasses was to provide some attributes to classes. Take for example:
class NameClass(type):
    def __init__(cls, *args, **kwargs):
        type.__init__(cls, *args, **kwargs)
        cls.name = cls.__name__
This will put a name attribute on every class whose metaclass is set to NameClass.

This is a minor use, but... one thing I've found metaclasses useful for is to invoke a function whenever a subclass is created. I codified this into a metaclass which looks for an __initsubclass__ attribute: whenever a subclass is created, all parent classes which define that method are invoked with __initsubclass__(cls, subcls). This allows creation of a parent class which then registers all subclasses with some global registry, runs invariant checks on subclasses whenever they are defined, performs late-binding operations, etc... all without having to manually call functions or to create custom metaclasses that perform each of these separate duties.
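Here's a hedged sketch of that idea (my reconstruction, not the original code; note that Python 3.6 later added a built-in __init_subclass__ hook that covers much of this):

class InitSubclassMeta(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        # Notify every parent class that defines __initsubclass__
        for base in cls.__mro__[1:]:
            hook = base.__dict__.get('__initsubclass__')
            if hook is not None:
                hook(base, cls)

registry = []

class Plugin(metaclass=InitSubclassMeta):
    def __initsubclass__(cls, subcls):
        registry.append(subcls)

class FooPlugin(Plugin):
    pass

print(registry)  # [<class '__main__.FooPlugin'>]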
Mind you, I've slowly come to realize the implicit magicalness of this behavior is somewhat undesirable, since it's unexpected when looking at a class definition out of context... and so I've moved away from using that solution for anything serious besides initializing a __super attribute for each class and instance.

I recently had to use a metaclass to help declaratively define an SQLAlchemy model around a database table populated with U.S. Census data from http://census.ire.org/data/bulkdata.html
IRE provides database shells for the census data tables, which create integer columns following a naming convention from the Census Bureau of p012015, p012016, p012017, etc.
I wanted to a) be able to access these columns using a model_instance.p012017 syntax, b) be fairly explicit about what I was doing and c) not have to explicitly define dozens of fields on the model, so I subclassed SQLAlchemy's DeclarativeMeta to iterate through a range of the columns and automatically create model fields corresponding to the columns:
from sqlalchemy.ext.declarative.api import DeclarativeMeta

class CensusTableMeta(DeclarativeMeta):
    def __init__(cls, classname, bases, dict_):
        table = 'p012'
        for i in range(1, 49):
            fname = "%s%03d" % (table, i)
            dict_[fname] = Column(Integer)
            setattr(cls, fname, dict_[fname])
        super(CensusTableMeta, cls).__init__(classname, bases, dict_)
I could then use this metaclass for my model definition and access the automatically enumerated fields on the model:
CensusTableBase = declarative_base(metaclass=CensusTableMeta)

class P12Tract(CensusTableBase):
    __tablename__ = 'ire_p12'
    geoid = Column(String(12), primary_key=True)

    @property
    def male_under_5(self):
        return self.p012003
    ...

There seems to be a legitimate use described here - Rewriting Python Docstrings with a Metaclass.
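For instance, a minimal sketch of a docstring-rewriting metaclass (my illustration of the idea behind that link, not its exact code):

class DocstringMeta(type):
    def __new__(meta, name, bases, attrs):
        # Prefix every method's docstring with its qualified name
        for key, value in attrs.items():
            if callable(value) and value.__doc__:
                value.__doc__ = '%s.%s: %s' % (name, key, value.__doc__)
        return super().__new__(meta, name, bases, attrs)

class Greeter(metaclass=DocstringMeta):
    def hello(self):
        """Say hello."""

print(Greeter.hello.__doc__)  # Greeter.hello: Say hello.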

Pydantic is a library for data validation and settings management that enforces type hints at runtime and provides user friendly errors when data is invalid. It makes use of metaclasses for its BaseModel and for number range validation.
At work I encountered some code implementing a process whose stages were defined by classes. The ordering of these steps was controlled by metaclasses that added the steps to a list as the classes were defined. This was thrown out, and the order is now set by adding them to a list explicitly.

I had to use them once for a binary parser to make it easier to use. You define a message class with attributes of the fields present on the wire.
They needed to be ordered the way they were declared, to construct the final wire format from them. You can do that with metaclasses, if you use an ordered namespace dict. In fact, it's in the docs' example for metaclasses:
https://docs.python.org/3/reference/datamodel.html#metaclass-example
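Here is a rough sketch of the technique under assumptions of my own (the WireField type and field names are invented; it relies on the class body's namespace preserving definition order, guaranteed since Python 3.6):

import struct

class WireField:
    def __init__(self, fmt):
        self.fmt = fmt  # a struct format code, e.g. 'B', 'H', 'I'

class MessageMeta(type):
    def __new__(meta, name, bases, attrs):
        # attrs preserves declaration order, so the wire format
        # follows the order in which the fields were declared
        fields = [(k, v) for k, v in attrs.items() if isinstance(v, WireField)]
        cls = super().__new__(meta, name, bases, attrs)
        cls._field_names = [k for k, v in fields]
        cls._struct = struct.Struct('>' + ''.join(v.fmt for k, v in fields))
        return cls

class Header(metaclass=MessageMeta):
    version = WireField('B')
    length = WireField('H')
    def pack(self, **values):
        return self._struct.pack(*(values[n] for n in self._field_names))

print(Header().pack(version=1, length=512).hex())  # 010200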
But in general: very carefully evaluate whether you really, really need the added complexity of metaclasses.

@Dan Gittik's answer is cool. The examples at the end clarify many things; I changed them to Python 3 and added some explanation:
class MetaMetaclass(type):
    def __new__(meta, name, bases, attrs):
        def __new__(meta, name, bases, attrs):
            cls = type.__new__(meta, name, bases, attrs)
            cls._label = 'Made in %s' % meta.__name__
            return cls
        attrs['__new__'] = __new__
        return type.__new__(meta, name, bases, attrs)

# China is a metaclass, and its __new__ method is replaced by MetaMetaclass (a meta-metaclass)
class China(MetaMetaclass, metaclass=MetaMetaclass):
    __metaclass__ = MetaMetaclass

# Taiwan is a metaclass, and its __new__ method is replaced by MetaMetaclass (a meta-metaclass)
class Taiwan(MetaMetaclass, metaclass=MetaMetaclass):
    __metaclass__ = MetaMetaclass

# A is a normal class, and its __new__ method is replaced by China (a metaclass)
class A(metaclass=China):
    __metaclass__ = China

# B is a normal class, and its __new__ method is replaced by Taiwan (a metaclass)
class B(metaclass=Taiwan):
    __metaclass__ = Taiwan

print(A._label)  # Made in China
print(B._label)  # Made in Taiwan
everything is an object, so a class is an object
a class object is created by its metaclass
every class that inherits from type is a metaclass
a metaclass can control class creation
a metaclass can control metaclass creation too (so it could loop forever)
this is metaprogramming... you can control the type system at runtime
again, everything is an object; this is a uniform system: type creates type, and type creates instance

Another use case is when you want to be able to modify class-level attributes and be sure that it only affects the object at hand. In practice, this implies "merging" the phases of metaclass and class instantiation, thus leading you to deal only with class instances of their own (unique) kind.
I also had to do that when (for concerns of readability and polymorphism) we wanted to dynamically define properties whose returned values (may) result from calculations based on (often changing) instance-level attributes; that can only be done at the class level, i.e. after the metaclass instantiation and before the class instantiation.
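A hedged sketch of that "one class per instance" idea (my own reconstruction of the pattern described above, not the original code):

class PerInstanceMeta(type):
    def __call__(cls, *args, **kwargs):
        # Mint a fresh one-off subclass for each instance, so that
        # class-level changes affect this object alone
        one_off = type(cls)(cls.__name__, (cls,), {})
        return super(PerInstanceMeta, one_off).__call__(*args, **kwargs)

class Widget(metaclass=PerInstanceMeta):
    color = 'red'

a = Widget()
b = Widget()
type(a).color = 'blue'     # a class-level change...
print(a.color, b.color)    # blue red  ...affects only `a`
print(type(a) is type(b))  # False: each instance gets its own class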

I know this is an old question, but here is a use case that is really invaluable if you want to create only a single instance of a class based on the parameters passed to the constructor.
Instance singletons
I use this code for creating a singleton instance of a device on a Z-Wave network. No matter how many times I create an instance, if the same values are passed to the constructor and an instance with exactly those values already exists, that is what gets returned.
import inspect

class SingletonMeta(type):
    # only here to make the IDE happy
    _instances = {}

    def __init__(cls, name, bases, dct):
        super(SingletonMeta, cls).__init__(name, bases, dct)
        cls._instances = {}

    def __call__(cls, *args, **kwargs):
        sig = inspect.signature(cls.__init__)
        keywords = {}
        for i, param in enumerate(list(sig.parameters.values())[1:]):
            if len(args) > i:
                keywords[param.name] = args[i]
            elif param.name not in kwargs and param.default != param.empty:
                keywords[param.name] = param.default
            elif param.name in kwargs:
                keywords[param.name] = kwargs[param.name]
        key = []
        for k in sorted(list(keywords.keys())):
            key.append(keywords[k])
        key = tuple(key)
        if key not in cls._instances:
            cls._instances[key] = (
                super(SingletonMeta, cls).__call__(*args, **kwargs)
            )
        return cls._instances[key]

class Test1(metaclass=SingletonMeta):
    def __init__(self, param1, param2='test'):
        pass

class Test2(metaclass=SingletonMeta):
    def __init__(self, param3='test1', param4='test2'):
        pass
test1 = Test1('test1')
test2 = Test1('test1', 'test2')
test3 = Test1('test1', 'test')
test4 = Test2()
test5 = Test2(param4='test1')
test6 = Test2('test2', 'test1')
test7 = Test2('test1')
print('test1 == test2:', test1 == test2)
print('test2 == test3:', test2 == test3)
print('test1 == test3:', test1 == test3)
print('test4 == test2:', test4 == test2)
print('test7 == test3:', test7 == test3)
print('test6 == test4:', test6 == test4)
print('test7 == test4:', test7 == test4)
print('test5 == test6:', test5 == test6)
print('number of Test1 instances:', len(Test1._instances))
print('number of Test2 instances:', len(Test2._instances))
output
test1 == test2: False
test2 == test3: False
test1 == test3: True
test4 == test2: False
test7 == test3: False
test6 == test4: False
test7 == test4: True
test5 == test6: False
number of Test1 instances: 2
number of Test2 instances: 3
Now someone might say it can be done without the use of a metaclass, and I know it can be done if the __init__ method is decorated; I do not know of another way to do it. The code below, while it returns a similar instance that contains all of the same data, is not a singleton instance: a new instance gets created. Because it creates a new instance with the same data, additional steps would need to be taken to check equality of instances. In the end it consumes more memory than using a metaclass, and with the metaclass no additional steps need to be taken to check equality.
class Singleton(object):
    _instances = {}

    def __init__(self, param1, param2='test'):
        key = (param1, param2)
        if key in self._instances:
            self.__dict__.update(self._instances[key].__dict__)
        else:
            self.param1 = param1
            self.param2 = param2
            self._instances[key] = self
test1 = Singleton('test1', 'test2')
test2 = Singleton('test')
test3 = Singleton('test', 'test')
print('test1 == test2:', test1 == test2)
print('test2 == test3:', test2 == test3)
print('test1 == test3:', test1 == test3)
print('test1 params', test1.param1, test1.param2)
print('test2 params', test2.param1, test2.param2)
print('test3 params', test3.param1, test3.param2)
print('number of Singleton instances:', len(Singleton._instances))
output
test1 == test2: False
test2 == test3: False
test1 == test3: False
test1 params test1 test2
test2 params test test
test3 params test test
number of Singleton instances: 2
The metaclass approach is also really nice if you need to check for the removal or addition of instances.
import inspect

class SingletonMeta(type):
    # only here to make the IDE happy
    _instances = {}

    def __init__(cls, name, bases, dct):
        super(SingletonMeta, cls).__init__(name, bases, dct)
        cls._instances = {}

    def __call__(cls, *args, **kwargs):
        sig = inspect.signature(cls.__init__)
        keywords = {}
        for i, param in enumerate(list(sig.parameters.values())[1:]):
            if len(args) > i:
                keywords[param.name] = args[i]
            elif param.name not in kwargs and param.default != param.empty:
                keywords[param.name] = param.default
            elif param.name in kwargs:
                keywords[param.name] = kwargs[param.name]
        key = []
        for k in sorted(list(keywords.keys())):
            key.append(keywords[k])
        key = tuple(key)
        if key not in cls._instances:
            cls._instances[key] = (
                super(SingletonMeta, cls).__call__(*args, **kwargs)
            )
        return cls._instances[key]

class Test(metaclass=SingletonMeta):
    def __init__(self, param1, param2='test'):
        pass
instances = []
instances.append(Test('test1', 'test2'))
instances.append(Test('test1', 'test'))
print('number of instances:', len(instances))
instance = Test('test2', 'test3')
if instance not in instances:
    instances.append(instance)
instance = Test('test1', 'test2')
if instance not in instances:
    instances.append(instance)
print('number of instances:', len(instances))
output
number of instances: 2
number of instances: 3
Here is a way to remove an instance that has been created after the instance is no longer in use.
import inspect
import weakref

class SingletonMeta(type):
    # only here to make the IDE happy
    _instances = {}

    def __init__(cls, name, bases, dct):
        super(SingletonMeta, cls).__init__(name, bases, dct)

        def remove_instance(c, ref):
            # weakref callback: drop the dead reference from the registry
            for k, v in list(c._instances.items())[:]:
                if v == ref:
                    del c._instances[k]
                    break

        cls.remove_instance = classmethod(remove_instance)
        cls._instances = {}

    def __call__(cls, *args, **kwargs):
        sig = inspect.signature(cls.__init__)
        keywords = {}
        for i, param in enumerate(list(sig.parameters.values())[1:]):
            if len(args) > i:
                keywords[param.name] = args[i]
            elif param.name not in kwargs and param.default != param.empty:
                keywords[param.name] = param.default
            elif param.name in kwargs:
                keywords[param.name] = kwargs[param.name]
        key = []
        for k in sorted(list(keywords.keys())):
            key.append(keywords[k])
        key = tuple(key)
        if key not in cls._instances:
            instance = super(SingletonMeta, cls).__call__(*args, **kwargs)
            cls._instances[key] = weakref.ref(
                instance,
                instance.remove_instance
            )
        return cls._instances[key]()

class Test1(metaclass=SingletonMeta):
    def __init__(self, param1, param2='test'):
        pass

class Test2(metaclass=SingletonMeta):
    def __init__(self, param3='test1', param4='test2'):
        pass
test1 = Test1('test1')
test2 = Test1('test1', 'test2')
test3 = Test1('test1', 'test')
test4 = Test2()
test5 = Test2(param4='test1')
test6 = Test2('test2', 'test1')
test7 = Test2('test1')
print('test1 == test2:', test1 == test2)
print('test2 == test3:', test2 == test3)
print('test1 == test3:', test1 == test3)
print('test4 == test2:', test4 == test2)
print('test7 == test3:', test7 == test3)
print('test6 == test4:', test6 == test4)
print('test7 == test4:', test7 == test4)
print('test5 == test6:', test5 == test6)
print('number of Test1 instances:', len(Test1._instances))
print('number of Test2 instances:', len(Test2._instances))
print()
del test1
del test5
del test6
print('number of Test1 instances:', len(Test1._instances))
print('number of Test2 instances:', len(Test2._instances))
output
test1 == test2: False
test2 == test3: False
test1 == test3: True
test4 == test2: False
test7 == test3: False
test6 == test4: False
test7 == test4: True
test5 == test6: False
number of Test1 instances: 2
number of Test2 instances: 3
number of Test1 instances: 2
number of Test2 instances: 1
If you look at the output, you will notice that the number of Test1 instances has not changed. That is because test1 and test3 are the same instance and I only deleted test1, so there is still a reference to that instance in the code, and as a result it does not get removed.
Another nice feature of this is that if the instance uses only the supplied parameters to do whatever it is tasked to do, you can use the metaclass to facilitate remote creation of the instance, either on a different computer entirely or in a different process on the same machine. The parameters can simply be passed over a socket or a named pipe, and a replica of the instance can be created on the receiving end.

Related

Is it possible to generate getters and setters functions based on some sort of attribute automatically?

I have a "user" class that has hundreds of attributes and I'm not sure if it is best to use a class and have to manually create hundreds of getters and setters which just set values to a key in a dict or just use the dictionary directly. Having the getters and setter seems to be the best way to abstract the internal representations of the attributes so I can change from a dict to whatever I need to later, I just don't want to expose those keys in case they need to change.
For example, we have a few attributes in class User:

class User(object):
    attributes = {"fname": None, "lname": None}
I would like something to let you call get_fname() and have it return the value of fname, or get_lname() and return the value of lname, without having to define the functions.
Same with the setters: set_fname(value) would set fname's value, and so on. I'm not sure if this is even possible or if I'm just overthinking the problem and missing a simple solution.
You could simply define all of the attributes in the class. Each instance will appear to have all of the class variables as members (at the default value), and if a value is set for an attribute on any particular instance it will update the instance __dict__ rather than the class __dict__.
class User(object):
    fname = ''
    mname = ''
    lname = ''
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

user1 = User(fname='Foo', lname='Bar')
user2 = User(lname='Public', fname='John')
user2.mname = 'Q'
print(f"{user1.fname} {user1.mname} {user1.lname}")  # prints "Foo  Bar" (empty mname)
print(f"{user2.fname} {user2.mname} {user2.lname}")  # prints "John Q Public"
As other comments have said, having so many variables in a single object is something that is best avoided if possible, and having explicit getters & setters is un-Pythonic.
If you need getters and/or setters, you can always add them later if & when you need them; Java won't let you do this, but Python will.
For this particular case, you can simply use the dict to initialize a bunch of variables, assuming you have some situation like:

class User:
    attribute_init = {"fname": None, "lname": None, "mi": ""}  # and many more
    def __init__(self, fname, lname):
        self.__dict__.update(__class__.attribute_init)
        self.fname = fname
        self.lname = lname

user = User('Foo', 'Bar')
print(f"{user.fname} {user.mi} {user.lname}")
Although, I would heed the advice in the comments about this being a major code smell: the sheer number of attributes and the dict-based initialization both suggest the design needs rethinking.
Although it's usually not necessary in Python to have getter and setters, if you're going to implement them anyway, you could minimize a lot of the repetitive code by using a class decorator as shown below:
def add_attributes(attributes):
    def class_decorator(cls):
        ''' Create getter and setter methods for each of the given attributes. '''
        for attr_name, initial_value in attributes.items():
            def getter(self, name=attr_name):
                return getattr(self, name)
            def setter(self, value, name=attr_name):
                setattr(self, name, value)
            setattr(cls, attr_name, initial_value)
            setattr(cls, 'get_' + attr_name, getter)
            setattr(cls, 'set_' + attr_name, setter)
        return cls
    return class_decorator

@add_attributes({"fname": None, "lname": None})
class User(object):
    pass

user = User()
user.set_fname('Lawrence')
user.set_lname('Dimmick')
print(f'{user.get_fname()!r}')  # -> 'Lawrence'
print(f'{user.get_lname()!r}')  # -> 'Dimmick'

Augmenting class attribute when inheriting in Python

I have a chain of inheritance in Python, and I want each child class to be able to add on new custom parameters. Right now I'm doing this:
class A(object):
    PARAM_NAMES = ['blah1']
    ...

class B(A):
    PARAM_NAMES = A.PARAM_NAMES + ['blah2']
    ...
I'm wondering if there's a slicker method, though, without referencing A twice? Can't use super() because it's not within a method definition, afaik. I suppose I could use a class method, but that'd be annoying (since I really would want a property).
What's the right way to do this?
Of course there is always black magic you can do... but the question is: just because you can, should you?
class MyMeta(type):
    items = []
    def __new__(meta, name, bases, dct):
        return super(MyMeta, meta).__new__(meta, name, bases, dct)
    def __init__(cls, name, bases, dct):
        MyMeta.items.extend(cls.items)
        cls.items = MyMeta.items[:]
        super(MyMeta, cls).__init__(name, bases, dct)

class MyKlass(object):
    __metaclass__ = MyMeta

class A(MyKlass):
    items = ["a", "b", "c"]

class B(A):
    items = ["1", "2", "3"]

print A.items
print B.items
Since this creates a copy, it will not suffer from the same problem as the other solution.
(Please note that I don't really recommend doing this... it's just to show you can.)
This may or may not be smart, but it's technically possible to use a metaclass for this. Unlike Joran's method, I use a property, so that it retains full dynamic nature (that is, if you modify any class's private _PARAM_NAMES list after defining the class, the corresponding PARAM_NAME property of every other derived class reflects that change). For this reason I put an add_param method on the base class.
Python 3 is assumed here, and the PARAM_NAMES property returns a set to avoid duplicate items.
class ParamNameBuilderMeta(type):
    def __new__(mcl, name, bases, dct):
        names = dct.get("PARAM_NAMES", [])
        names = {names} if isinstance(names, str) else set(names)
        dct["_PARAM_NAMES"] = names
        dct["PARAM_NAMES"] = property(lambda s: type(s).PARAM_NAMES)
        return super().__new__(mcl, name, bases, dct)

    @property
    def PARAM_NAMES(cls):
        # collect unique list items ONLY from our classes in the MRO
        return set().union(*(c._PARAM_NAMES for c in reversed(cls.__mro__)
                             if isinstance(c, ParamNameBuilderMeta)))
Usage:
class ParamNameBuilderBase(metaclass=ParamNameBuilderMeta):
    @classmethod
    def add_param(cls, param_name):
        cls._PARAM_NAMES.add(param_name)

class A(ParamNameBuilderBase):
    PARAM_NAMES = 'blah1'

class B(A):
    PARAM_NAMES = 'blah1', 'blah2'

class C(B):
    pass
Check to make sure it works on both classes and instances thereof:
assert C.PARAM_NAMES == {'blah1', 'blah2'}
assert C().PARAM_NAMES == {'blah1', 'blah2'}
Check to make sure it's still dynamic:
C.add_param('blah3')
assert C.PARAM_NAMES == {'blah1', 'blah2', 'blah3'}
The behavior you've described is actually quite specific. You've said that you
want each child class to be able to add on new custom parameters
But the way you've implemented it, this will result in unpredictable behaviour. Consider:
class A(object):
    PARAM_NAMES = ['blah1']

class B(A):
    PARAM_NAMES = A.PARAM_NAMES + ['blah2']

class C(A):
    pass

print(A.PARAM_NAMES)
print(B.PARAM_NAMES)
print(C.PARAM_NAMES)

A.PARAM_NAMES.append('oops')
print(C.PARAM_NAMES)
What we notice is that the classes that choose to add new parameters have a new reference to the parameter list, while ones that do not add new parameters have the same reference as their parent. Unless carefully controlled, this is unsafe behaviour.
It is more reliable to only use constants as class properties, or to redefine the list entirely each time (make it a tuple), which is not "slicker". Otherwise, I'd recommend class methods, as you suggest, and making the property an instance variable.

Attribute mapping with a Python property

Is there a way to make a Python #property act as a setter and getter all at once?
I feel like I've seen this somewhere before but can't remember and can't recreate the solution myself.
For example, instead of:
class A(object):
    def __init__(self, b):
        self.b = b
    def get_c(self):
        return self.b.c
    def set_c(self, value):
        self.b.c = value
    c = property(get_c, set_c)
we could somehow signal that for A objects, the c attribute is really equivalent to b.c for getter, setter (and deleter if we like).
Motivation:
This would be particularly useful when we need A to be a proxy wrapper around B objects (of which b is an instance) but share only the data attributes and no methods. Properties such as these would allow the A and B objects' data to stay completely in sync while both are used by the same code.
I think you are looking for this forwardTo class as posted on ActiveState.
This recipe lets you transparently forward attribute access to another object in your class. This way, you can expose functionality from some member of your class instance directly, e.g. foo.baz() instead of foo.bar.baz().
class forwardTo(object):
    """
    A descriptor based recipe that makes it possible to write shorthands
    that forward attribute access from one object onto another.

    >>> class C(object):
    ...     def __init__(self):
    ...         class CC(object):
    ...             def xx(self, extra):
    ...                 return 100 + extra
    ...             foo = 42
    ...         self.cc = CC()
    ...
    ...     localcc = forwardTo('cc', 'xx')
    ...     localfoo = forwardTo('cc', 'foo')
    ...
    >>> print C().localcc(10)
    110
    >>> print C().localfoo
    42

    Arguments: objectName - name of the attribute containing the second object.
               attrName - name of the attribute in the second object.
    Returns: An object that will forward any calls as described above.
    """
    def __init__(self, objectName, attrName):
        self.objectName = objectName
        self.attrName = attrName
    def __get__(self, instance, owner=None):
        return getattr(getattr(instance, self.objectName), self.attrName)
    def __set__(self, instance, value):
        setattr(getattr(instance, self.objectName), self.attrName, value)
    def __delete__(self, instance):
        delattr(getattr(instance, self.objectName), self.attrName)
For a more robust code, you may want to consider replacing getattr(instance, self.objectName) with operator.attrgetter(self.objectName)(instance). This would allow objectName to be a dotted name (e.g., so you could have A.c be a proxy for A.x.y.z.d).
If you're trying to delegate a whole slew of properties from any A object to its b member, it's probably easier to do that inside __getattr__, __setattr__, and __delattr__, e.g.:
class A(object):
    delegated = ['c', 'd', 'e', 'f']
    def __getattr__(self, attr):
        if attr in A.delegated:
            return getattr(self.b, attr)
        raise AttributeError()
I haven't shown the __setattr__ and __delattr__ definitions here, for brevity, and to avoid having to explain the difference between __getattr__ and __getattribute__. See the docs if you need more information.
This is readily extensible to classes that want to proxy different attributes to different members:
class A(object):
    b_delegated = ['c', 'd', 'e', 'f']
    x_delegated = ['y', 'z']
    def __getattr__(self, attr):
        if attr in A.b_delegated:
            return getattr(self.b, attr)
        elif attr in A.x_delegated:
            return getattr(self.x, attr)
        else:
            raise AttributeError()
If you need to delegate all attributes, dynamically, that's almost as easy. You just get a list of self.b's attributes (or self.b.__class__'s) at init time or at call time (which of the four possibilities depends on exactly what you want to do), and use that in place of the static list b_delegated.
You can of course filter this by name (e.g., to remove _private methods), or by type, or any arbitrary predicate (e.g., to remove any callable attributes).
Or combine any of the above.
At any rate, this is the idiomatic way to do (especially dynamic) proxying in Python. It's not perfect, but trying to invent a different mechanism is probably not a good idea.
And in fact, it's not really meant to be perfect. This is something you shouldn't be doing too often, and shouldn't be trying to disguise when you do it. It's obvious that a ctypes.cdll or a pyobjc module is actually delegating to something else, because it's actually useful for the user to know that. If you really need to delegate most of the public interface of one class to another, and don't want the user to know about the delegation… maybe you don't need it. Maybe it's better to just expose the private object directly, or reorganize your object model so the user is interacting with the right things in the first place.
There's the decorator syntax for creating properties, then there are full blown custom-defined descriptors, but since the setter/getter pseudo-private pattern is actively discouraged in Python and the Python community, there isn't really a widely distributed or commonly used way to do what you are looking for.
For proxy objects, you can use __getattr__, __setattr__, and __getattribute__, or try to manipulate things earlier in the process by fooling around with __new__ or a metaclass.
def make_property(parent, attr):
    def get(self):
        return getattr(getattr(self, parent), attr)
    def set(self, value):
        setattr(getattr(self, parent), attr, value)
    return property(get, set)

class A(object):
    def __init__(self, b):
        self.b = b
    c = make_property('b', 'c')
Here's another way of doing it, statically forwarding properties from one object to another, but with economy.
It allows you to forward a get/set property in two lines, and a read-only property in one line, making use of dynamic property definition at the class level and lambdas.
class A:
    """Classic definition of property, with decorator"""
    _id = ""
    _answer = 42

    @property
    def id(self):
        return self._id

    @id.setter
    def id(self, value):
        self._id = value

    @property
    def what(self):
        return self._answer

class B:
    obj = A()
    # Forward "id" from self.obj
    id = property(lambda self: self.obj.id,
                  lambda self, value: setattr(self.obj, "id", value))
    # Forward a read-only property from self.obj
    what = property(lambda self: self.obj.what)

Python Classes: turn all inherited methods private

Class Bar inherits from Foo:
class Foo(object):
    def foo_meth_1(self):
        return 'foometh1'
    def foo_meth_2(self):
        return 'foometh2'

class Bar(Foo):
    def bar_meth(self):
        return 'bar_meth'
Is there a way of turning all methods inherited from Foo private?
class Bar(Foo):
    def bar_meth(self):
        return 'bar_meth'
    def __foo_meth_1(self):
        return 'foometh1'
    def __foo_meth_2(self):
        return 'foometh2'
Python doesn't have privates, only obfuscated method names. But I suppose you could iterate over the methods of the superclass when creating the instance, removing them from yourself and creating new obfuscatingly named methods for those functions. setattr and getattr could be useful if you use a function to create the obfuscated names.
With that said, it's a pretty cthulhu-oid thing to do. You mention the intent is to keep the namespace cleaner, but this is more like mixing ammonia and chlorine. If the method needs to be hidden, hide it in the superclass. Then don't create instances of the superclass -- instead create a specific class that wraps the hidden methods in public ones, which you could name the same thing minus the leading underscores.
Assuming I understand your intent correctly, I would suggest doing something like this:
class BaseFoo(object):
    def __init__(self):
        raise NotImplementedError('No instances of BaseFoo please.')
    def _foo(self):
        return 'Foo.'
    def _bar(self):
        return 'Bar.'

class HiddenFoo(BaseFoo):
    def __init__(self): pass

class PublicFoo(BaseFoo):
    def __init__(self): pass
    foo = BaseFoo._foo
    bar = BaseFoo._bar

def try_foobar(instance):
    print 'Trying ' + instance.__class__.__name__
    try:
        print 'foo: ' + instance.foo()
        print 'bar: ' + instance.bar()
    except AttributeError, e:
        print e

foo_1 = HiddenFoo()
foo_2 = PublicFoo()
try_foobar(foo_1)
try_foobar(foo_2)
And if PublicFoo.foo would do something more than BaseFoo.foo, you would write a wrapper that does whatever is needed, and then calls foo from the superclass.
This is only possible with Python's metaclasses. But it is quite sophisticated and I am not sure if it is worth the effort. For details have a look here.
Why would you like to do so?
Since foo() and __foo() are completely different methods with no link between them, Python is unable to understand what you want to do. So you have to explain it step by step, meaning (like sapth said) removing the old methods and adding new ones.
This is an Object Oriented Design flaw and a better approach would be through delegation:
class Basic:
    def meth_1(self):
        return 'meth1'
    def meth_2(self):
        return 'meth2'

class Foo(Basic):
    # Nothing to do here
    pass

class Bar:
    def __init__(self):
        self.dg = Basic()
    def bar_meth(self):
        return 'bar_meth ' + self.__meth_1()
    def __meth_1(self):
        return self.dg.meth_1()
    def __meth_2(self):
        return self.dg.meth_2()
While Foo inherits from Basic because it wants Basic's public methods, Bar only delegates the job to a Basic instance because it doesn't want to integrate Basic's interface into its own.
You can use metaclasses, but Bar will no longer be an actual subclass of Foo, unless you want Foo's methods to be both 'private' and 'public' in instances of Bar (you cannot selectively inherit names or delattr members inherited from parent classes). Here is a very contrived example:
from inspect import getmembers, isfunction

class TurnPrivateMetaclass(type):
    def __new__(cls, name, bases, d):
        private = {'__%s' % i: j for i, j in getmembers(bases[0]) if isfunction(j)}
        d.update(private)
        return type.__new__(cls, name, (), d)

class Foo:
    def foo_meth_1(self): return 'foometh1'
    def foo_meth_2(self): return 'foometh2'

class Bar(Foo, metaclass=TurnPrivateMetaclass):
    def bar_meth(self): return 'bar_meth'

b = Bar()
assert b.__foo_meth_1() == 'foometh1'
assert b.__foo_meth_2() == 'foometh2'
assert b.bar_meth() == 'bar_meth'
If you wanted to get attribute access working, you could create a new Foo base class in __new__ with all renamed methods removed.

Polluting a class's environment

I have an object that holds lots of ids that are accessed statically. I want to split that up into another object which holds only those ids, without making modifications to the already existing code base. Take for example:
class _CarType(object):
    DIESEL_CAR_ENGINE = 0
    GAS_CAR_ENGINE = 1  # lots of these ids

class Car(object):
    types = _CarType
I want to be able to access _CarType.DIESEL_CAR_ENGINE either by calling Car.types.DIESEL_CAR_ENGINE, or by Car.DIESEL_CAR_ENGINE for backwards compatibility with the existing code. It's clear that I cannot use __getattr__, so I am trying to find a way of making this work (maybe metaclasses?)
Although this is not exactly what subclassing is made for, it accomplishes what you describe:
class _CarType(object):
    DIESEL_CAR_ENGINE = 0
    GAS_CAR_ENGINE = 1  # lots of these ids

class Car(_CarType):
    types = _CarType
Something like:
class Car(object):
    for attr, value in _CarType.__dict__.items():
        if not attr.startswith('_'):
            locals()[attr] = value
    del attr, value
Or you can do it outside the class declaration:

class Car(object):
    # snip
    pass

for attr, value in _CarType.__dict__.items():
    if not attr.startswith('_'):
        setattr(Car, attr, value)
del attr, value
This is how you could do this with a metaclass:
class _CarType(type):
    DIESEL_CAR_ENGINE = 0
    GAS_CAR_ENGINE = 1  # lots of these ids
    def __init__(self, name, bases, dct):
        for key in dir(_CarType):
            if key.isupper():
                setattr(self, key, getattr(_CarType, key))

class Car(object):
    __metaclass__ = _CarType

print(Car.DIESEL_CAR_ENGINE)
print(Car.GAS_CAR_ENGINE)
Your options fall into two substantial categories: you either copy the attributes from _CarType into Car, or set Car's metaclass to a custom one with a __getattr__ method that delegates to _CarType (so it isn't exactly true that you can't use __getattr__: you can, you just need to put in in Car's metaclass rather than in Car itself;-).
The second choice has implications that you might find peculiar (unless they are specifically desired): the attributes don't show up on dir(Car), and they can't be accessed on an instance of Car, only on Car itself. I.e.:
>>> class MetaGetattr(type):
...     def __getattr__(cls, nm):
...         return getattr(cls.types, nm)
...
>>> class Car:
...     __metaclass__ = MetaGetattr
...     types = _CarType
...
>>> Car.GAS_CAR_ENGINE
1
>>> Car().GAS_CAR_ENGINE
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Car' object has no attribute 'GAS_CAR_ENGINE'
You could fix the "not from an instance" issue by also adding a __getattr__ to Car:
>>> class Car:
...     __metaclass__ = MetaGetattr
...     types = _CarType
...     def __getattr__(self, nm):
...         return getattr(self.types, nm)
...
to make both kinds of lookup work, as is probably expected:
>>> Car.GAS_CAR_ENGINE
1
>>> Car().GAS_CAR_ENGINE
1
However, defining two, essentially-equal __getattr__s, doesn't seem elegant.
So I suspect that the simpler approach, "copy all attributes", is preferable. In Python 2.6 or better, this is an obvious candidate for a class decorator:
def typesfrom(typesclass):
    def decorate(cls):
        cls.types = typesclass
        for n in dir(typesclass):
            if n[0] == '_':
                continue
            v = getattr(typesclass, n)
            setattr(cls, n, v)
        return cls
    return decorate

@typesfrom(_CarType)
class Car(object):
    pass
In general, it's worth defining a decorator if you're using it more than once; if you only need to perform this task for one class ever, then expanding the code inline instead (after the class statement) may be better.
If you're stuck with Python 2.5 (or even 2.4), you can still define typesfrom the same way; you just apply it in a slightly less elegant manner, i.e., the Car definition becomes:
class Car(object):
    pass

Car = typesfrom(_CarType)(Car)
Do remember decorator syntax (introduced in 2.4 for functions, in 2.6 for classes) is just a handy way to wrap these important and frequently recurring semantics.
class _CarType(object):
    DIESEL_CAR_ENGINE = 0
    GAS_CAR_ENGINE = 1  # lots of these ids

class Car:
    types = _CarType
    def __getattr__(self, name):
        return getattr(self.types, name)
If an attribute of an object is not found, and it defines that magic method __getattr__, that gets called to try to find it.
Only works on a Car instance, not on the class.
