Extending a frozen dataclass and taking all data from a base class instance - python

Suppose we have a class coming from a library,
from dataclasses import dataclass, InitVar
from typing import Sequence

@dataclass(frozen=True)
class Dog:
    name: str
    blabla: int
    # lots of parameters
    # ...
    whatever: InitVar[Sequence[str]]
I have a dog constructor coming from an external library.
pluto = dog_factory() # returns a Dog object
I would like this dog to have a new member, let's say 'bite'.
Obviously pluto.bite = True will fail, since the dataclass is frozen.
So my idea is to make a subclass from Dog and get all the data from the 'pluto' instance.
class AngryDog(Dog):
    # what will come here?
Is there a way to avoid manually putting all the Dog parameters in __init__?
Something like a copy constructor.
ideally:
class AngryDog(Dog):
    def __init__(self, dog, bite=True):
        copy_construct(dog)

If you want to use inheritance to solve your problem, you need to start off with writing a proper AngryDog subclass that you can use to build sane instances from.
The next step would be to add a from_dog classmethod, something like this maybe:
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AngryDog(Dog):
    bite: bool = True

    @classmethod
    def from_dog(cls, dog: Dog, **kwargs):
        return cls(**asdict(dog), **kwargs)
But following this pattern, you'll face a specific edge case, which you yourself already pointed out through the whatever parameter. When re-calling the Dog constructor, any InitVar will be missing from an asdict call, since InitVars are not proper members of the class. In fact, anything that takes place in a dataclass's __post_init__, which is where InitVars go, might lead to bugs or unexpected behavior.
If it's only minor stuff like filtering or deleting known parameters from the cls call and the parent class is not expected to change, you can just try to handle it in from_dog. But there is conceptually no way to provide a general solution for this kind of from_instance problem.
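For illustration, a minimal sketch of such special-casing, assuming whatever is the only InitVar on Dog and that the caller can re-supply it (or accept a dummy default):
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AngryDog(Dog):
    bite: bool = True

    @classmethod
    def from_dog(cls, dog: Dog, whatever=(), **kwargs):
        # asdict() only covers real fields, so the InitVar `whatever`
        # must be re-supplied by hand (here: a dummy default)
        return cls(**asdict(dog), whatever=whatever, **kwargs)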
Composition would work bug-free from a data-integrity perspective, but might be unidiomatic or clunky given the exact matter at hand. Such a dog extension wouldn't be usable in place of a proper dog instance, but we could duck-type it into the right shape in case it's necessary:
class AngryDogExtension:
    def __init__(self, dog, bite=True):
        self.dog = dog
        self.bite = bite

    def __getattr__(self, item):
        """Will make instances of this class bark like a dog."""
        return getattr(self.dog, item)
Usage:
# starting with a basic dog instance
>>> dog = Dog(name='pluto', blabla=1, whatever=['a', 'b'])
>>> dog_e = AngryDogExtension(dog)
>>> dog_e.bite  # no surprise here, just a regular member
True
>>> dog_e.name  # this class proxies its dog member, so no need to run `dog_e.dog.name`
'pluto'
But ultimately, the point remains that isinstance(dog_e, Dog) will return False. If you're committed to making that call return True, there is some advanced trickery to help you out, and make anyone who inherits your code hate you:
class AngryDogDoppelganger(Dog):
    def __init__(self, bite, **kwargs):
        if "__dog" in kwargs:
            object.__setattr__(self, "__dog", kwargs["__dog"])
        else:
            object.__setattr__(self, "__dog", Dog(**kwargs))
        object.__setattr__(self, "bite", bite)

    @classmethod
    def from_dog(cls, dog, bite=True):
        # pass the key through an explicit dict, since the identifier
        # __dog would be name-mangled inside the class body
        return cls(bite, **{"__dog": dog})

    def __getattribute__(self, name):
        """Will make instances of this class bark like a dog.

        Can't use __getattr__, since it will see its own instance
        attributes. To have __dog work as a proxy, it needs to be
        checked before basic attribute lookup.
        """
        try:
            return getattr(object.__getattribute__(self, "__dog"), name)
        except AttributeError:
            pass
        return object.__getattribute__(self, name)
Usage:
# starting with a basic dog instance
>>> dog = Dog(name='pluto', blabla=1, whatever=['a', 'b'])
# the doppelganger offers a from_dog classmethod, as well as
# a constructor that works as expected of a subclass
>>> angry_1 = AngryDogDoppelganger.from_dog(dog)
>>> angry_2 = AngryDogDoppelganger(name='pluto', blabla=1, whatever=['a', 'b'], bite=True)
# instances also bark like a dog, and now even think they're a dog
>>> angry_1.bite  # from subclass
True
>>> angry_1.name  # looks like it's inherited from the parent class, but is actually proxied from __dog
'pluto'
>>> isinstance(angry_1, Dog)  # 🎉
True
Most of the dataclass-added methods, like __repr__, will be broken though, including plugging doppelganger instances into things like dataclasses.asdict or even just vars - so use at your own risk.

Related

How to overwrite self after reading yaml? [duplicate]

I would like to replace an object instance by another instance inside a method like this:
class A:
    def method1(self):
        self = func(self)
The object is retrieved from a database.
It is unlikely that replacing the 'self' variable will accomplish whatever you're trying to do that couldn't just be accomplished by storing the result of func(self) in a different variable. 'self' is effectively a local variable only defined for the duration of the method call, used to pass in the instance of the class which is being operated upon. Replacing self will not actually replace references to the original instance of the class held by other objects, nor will it create a lasting reference to the new instance which was assigned to it.
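A minimal sketch of that point (hypothetical names):
class A:
    def method1(self):
        self = A()  # rebinds the local name `self` only

a = A()
a.method1()
# `a` is untouched; the A() created inside method1 is simply discarded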
As far as I understand, if you are trying to replace the current object with another object of the same type (assuming func won't change the object type) from a member function, this will achieve that:
class A:
    def method1(self):
        newObj = func(self)
        self.__dict__.update(newObj.__dict__)
It is not a direct answer to the question, but in the posts below there's a solution for what amirouche tried to do:
Python object conversion
Can I dynamically convert an instance of one class to another?
And here's a working code sample (Python 3.2.5).
class Men:
    def __init__(self, name):
        self.name = name

    def who_are_you(self):
        print("I'm a men! My name is " + self.name)

    def cast_to(self, sex, name):
        self.__class__ = sex
        self.name = name

    def method_unique_to_men(self):
        print('I made The Matrix')


class Women:
    def __init__(self, name):
        self.name = name

    def who_are_you(self):
        print("I'm a women! My name is " + self.name)

    def cast_to(self, sex, name):
        self.__class__ = sex
        self.name = name

    def method_unique_to_women(self):
        print('I made Cloud Atlas')


men = Men('Larry')
men.who_are_you()
#>>> I'm a men! My name is Larry
men.method_unique_to_men()
#>>> I made The Matrix
men.cast_to(Women, 'Lana')
men.who_are_you()
#>>> I'm a women! My name is Lana
men.method_unique_to_women()
#>>> I made Cloud Atlas
Note the self.__class__ and not self.__class__.__name__. I.e. this technique not only replaces the class name, but actually converts the instance to the other class (at least both of them have the same id()). Also, 1) I don't know whether it is "safe to replace a self object by another object of the same type in [an object's own] method"; 2) it works with different types of objects, not only with ones that are of the same type; 3) it works not exactly like amirouche wanted: you can't init the class like Class(args), only Class() (I'm not a pro and can't answer why it's like this).
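For what it's worth, the reason behind point 3: assigning to __class__ swaps the instance's type but never calls the new class's __init__, which is why cast_to has to update the state by hand. A minimal sketch:
men = Men('Larry')
men.__class__ = Women   # no Women.__init__ call happens here
men.name = 'Lana'       # so the state must be set explicitly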
Yes, all that will happen is that you won't be able to reference the current instance of your class A (unless you set another variable to self before you change it). I wouldn't recommend it though; it makes for less readable code.
Note that you're only changing a variable, just like any other. Doing self = 123 is the same as doing abc = 123. self is only a reference to the current instance within the method. You can't change your instance by setting self.
What func(self) should do is to change the variables of your instance:
def func(obj):
    obj.var_a = 123
    obj.var_b = 'abc'
Then do this:
class A:
    def method1(self):
        func(self)  # no need to assign to self here
In many cases, a good way to achieve what you want is to call __init__ again. For example:
class MyList(list):
    def trim(self, n):
        self.__init__(self[:-n])

x = MyList([1, 2, 3, 4])
x.trim(2)
assert type(x) == MyList
assert x == [1, 2]
Note that this comes with a few assumptions, such as everything you want to change about the object being set in __init__. Also beware that this could cause problems with inheriting classes that redefine __init__ in an incompatible manner.
Yes, there is nothing wrong with this. Haters gonna hate. (Looking at you, PyCharm, with your "In most cases imaginable, there's no point in such reassignment and it indicates an error".)
A situation where you could do this is:
def some_method(self, ...):
    ...
    if some_condition:
        self = self.some_other_method()
    ...
    return ...
Sure, you could start the method body by reassigning self to some other variable, but if you wouldn't normally do that with other parameters, why do it with self?
One can use self assignment in a method to change the class of an instance to a derived class.
Of course one could assign it to a new object, but then the use of the new object ripples through the rest of the code in the method. Reassigning it to self leaves the rest of the method untouched.
class aclass:
    def methodA(self):
        ...
        if condition:
            self = replace_by_derived(self)
            # self now references an instance of a derived class,
            # with probably the same values for its data attributes
        # all code here remains untouched
        ...
        self.methodB()  # calls the methodB of derivedclass if condition is True
        ...

    def methodB(self):
        # methodB of class aclass
        ...

class derivedclass(aclass):
    def methodB(self):
        # methodB of class derivedclass
        ...
But apart from such a special use case, I don't see any advantages to replace self.
You can make the instance a singleton element of the class and mark the methods with @classmethod.
from enum import IntEnum
from collections import namedtuple

class kind(IntEnum):
    circle = 1
    square = 2

def attr(y):
    return [getattr(y, x) for x in 'k l b u r'.split()]

class Shape(namedtuple('Shape', 'k,l,b,u,r')):
    self = None

    @classmethod
    def __repr__(cls):
        return "<Shape({},{},{},{},{}) object at {}>".format(
            *(attr(cls.self) + [id(cls.self)]))

    @classmethod
    def transform(cls, func):
        cls.self = cls.self._replace(**func(cls.self))

Shape.self = Shape(k=1, l=2, b=3, u=4, r=5)
s = Shape.self

def nextkind(self):
    return {'k': self.k + 1}

print(repr(s))  # <Shape(1,2,3,4,5) object at 139766656561792>
s.transform(nextkind)
print(repr(s))  # <Shape(2,2,3,4,5) object at 139766656561888>

How can a subclass be created from a string passed to the superclass in python?

Given a superclass with two (or many) subclasses; for the sake of simplicity, let's call them Super, Sub1 and Sub2 respectively.
I would like to instantiate Sub1 and Sub2 as follows:
s1 = Super('Sub1')
s2 = Super('Sub2')
i.e., passing the name of the subclasses as strings to the constructor of the superclass.
Something that came to mind was defining a class variable in Super with the names of the subclasses; with a couple of if statements in the constructor of the Super class, the corresponding subclass constructor could be called.
I'm not completely sure if that would work, but it just seems messy to me.
Any suggestion on how to tackle this problem with a clean and pythonic approach is welcomed.
This requires overriding Super.__new__ to make it into a factory function.
class Super:
    def __new__(cls, subclass_name, *args, **kwargs):
        for sc in cls.__subclasses__():
            if sc.__name__ == subclass_name:
                # object.__new__ takes no extra arguments; any *args/**kwargs
                # are handed to __init__ by the normal construction machinery
                return super().__new__(sc)
        raise ValueError("No such subclass")

class Sub1(Super):
    pass

class Sub2(Super):
    pass

assert type(Super('Sub1')) is Sub1
This requires more work to allow you to define a subclass directly (as Sub1() will invoke Super.__new__, since Sub1.__new__ is not defined).
As such, I would prefer a dedicated class method that takes a class name, rather than overriding __new__.
class Super:
    @classmethod
    def subclass_by_name(cls, name, *args, **kwargs):
        for sc in cls.__subclasses__():
            if sc.__name__ == name:
                return sc(*args, **kwargs)
        raise ValueError("No such subclass")

assert type(Super.subclass_by_name('Sub1')) is Sub1
I don't really think this is a good idea. A superclass is not meant as a container for a finite list of possible subclasses; that pretty much entirely goes against the point of inheritance. If I were you I would instead create a simple factory from which I can construct the objects I want, e.g.:
class Fruit:
    ...

class Apple(Fruit):
    ...

class Pear(Fruit):
    ...

fruits = {
    "apple": Apple,
    "pear": Pear,
}

def make_fruit(name, **kwargs):
    return fruits[name](**kwargs)

apple = make_fruit("apple", colour="red")
pear = make_fruit("pear", size="small")

How to implement a factory class?

I want to be able to create objects based on an enumeration class, and use a dictionary. Something like this:
class IngredientType(Enum):
    SPAM = auto()  # Some spam
    BAKED_BEANS = auto()  # Baked beans
    EGG = auto()  # Fried egg

class Ingredient(object):
    pass

class Spam(Ingredient):
    pass

class BakedBeans(Ingredient):
    pass

class Egg(Ingredient):
    pass

class IngredientFactory(object):
    """Factory makes ingredients"""

    choice = {
        IngredientType.SPAM: IngredientFactory.makeSpam,
        IngredientType.BAKED_BEANS: IngredientFactory.makeBakedBeans,
        IngredientType.EGG: IngredientFactory.makeEgg
    }

    @staticmethod
    def make(type):
        method = choice[type]
        return method()

    @staticmethod
    def makeSpam():
        return Spam()

    @staticmethod
    def makeBakedBeans():
        return BakedBeans()

    @staticmethod
    def makeEgg():
        return Egg()
But I get the error:
NameError: name 'IngredientFactory' is not defined
For some reason the dictionary can't be created.
Where am I going wrong here?
Python is not Java and doesn't require everything to be in a class. Here your IngredientFactory class has no state and only static methods, so it's actually just a singleton namespace, which in Python is canonically done using the module itself as the singleton namespace, with plain functions. Also, since Python classes are already callable, wrapping the instantiation in a function doesn't make much sense. The simple, straightforward, pythonic implementation would be:
# ingredients.py
from enum import Enum, auto

class IngredientType(Enum):
    SPAM = auto()  # Some spam
    BAKED_BEANS = auto()  # Baked beans
    EGG = auto()  # Fried egg

class Ingredient(object):
    pass

class Spam(Ingredient):
    pass

class Beans(Ingredient):
    pass

class Egg(Ingredient):
    pass

_choice = {
    IngredientType.SPAM: Spam,
    IngredientType.BAKED_BEANS: Beans,
    IngredientType.EGG: Egg
}

def make(ingredient_type):
    cls = _choice[ingredient_type]
    return cls()
And the client code:
import ingredients
egg = ingredients.make(ingredients.IngredientType.EGG)
# or much more simply:
egg = ingredients.Egg()
FWIW the IngredientType enum doesn't bring much here, and even makes things more complicated than they have to be - you could just use plain strings:
# ingredients.py
class Ingredient(object):
    pass

class Spam(Ingredient):
    pass

class Beans(Ingredient):
    pass

class Egg(Ingredient):
    pass

_choice = {
    "spam": Spam,
    "beans": Beans,
    "egg": Egg
}

def make(ingredient_type):
    cls = _choice[ingredient_type]
    return cls()
And the client code:
import ingredients
egg = ingredients.make("egg")
Or if you really want to use an Enum, you can at least get rid of the choices dict by using the classes themselves as values for the enum as suggested by MadPhysicist:
# ingredients.py
from enum import Enum

class Ingredient(object):
    pass

class Spam(Ingredient):
    pass

class Beans(Ingredient):
    pass

class Egg(Ingredient):
    pass

class IngredientType(Enum):
    SPAM = Spam
    BEANS = Beans
    EGG = Egg

    @staticmethod
    def make(ingredient_type):
        return ingredient_type.value()
and the client code
from ingredients import IngredientType
egg = IngredientType.make(IngredientType.EGG)
But I really don't see any benefit here either.
EDIT: you mention:
I am trying to implement the factory pattern, with the intent of hiding the creation of objects away. The user of the factory then just handles 'Ingredients' without knowledge of the concrete type
The user still has to specify what kind of ingredient he wants (the ingredient_type argument), so I'm not sure I understand the benefit here. What's your real use case, actually? (The problem with made-up / dumbed-down examples is that they don't tell the whole story.)
After looking at Bruce Eckel's book I came up with this:
# Based on Bruce Eckel's book, Python 3 example
# A simple static factory method.
from __future__ import generators
import random
from enum import Enum, auto

class ShapeType(Enum):
    CIRCLE = auto()  # Some circles
    SQUARE = auto()  # some squares

class Shape(object):
    pass

class Circle(Shape):
    def draw(self): print("Circle.draw")
    def erase(self): print("Circle.erase")

class Square(Shape):
    def draw(self): print("Square.draw")
    def erase(self): print("Square.erase")

class ShapeFactory(object):
    @staticmethod
    def create(type):
        # return eval(type + "()")  # simple alternative
        if type in ShapeFactory.choice:
            return ShapeFactory.choice[type]()
        assert 0, "Bad shape creation: " + str(type)
    choice = {ShapeType.CIRCLE: Circle,
              ShapeType.SQUARE: Square}

# Test factory
# Generate shape types:
def shapeNameGen(n):
    types = list(ShapeType)
    for i in range(n):
        yield random.choice(types)

shapes = [ShapeFactory.create(i) for i in shapeNameGen(7)]
for shape in shapes:
    shape.draw()
    shape.erase()
This gets the user to select a class type from the enumeration, and blocks any other type. It also means users are less likely to write 'bad strings' with spelling mistakes. They just use the enums.
The output from the test is then, something like this:
Circle.draw
Circle.erase
Circle.draw
Circle.erase
Square.draw
Square.erase
Square.draw
Square.erase
Circle.draw
Circle.erase
Circle.draw
Circle.erase
Square.draw
Square.erase
Place your mapping at the end of the class, and reference the methods directly, since they're in the same namespace:
choice = {
    IngredientType.SPAM: makeSpam,
    IngredientType.BAKED_BEANS: makeBakedBeans,
    IngredientType.EGG: makeEgg
}
A class object is not created until all the code in the class body has run, so you can't access the class itself from inside it. However, since the class body is processed in a dedicated namespace, you can access any attribute you've defined up to that point (which is why the mapping has to come at the end). Note also that while you can access globals and built-ins, you can't access the namespaces of enclosing classes or functions.
Here's the detailed but still introductory explanation from the official docs explaining how classes are executed: https://docs.python.org/3/tutorial/classes.html#a-first-look-at-classes
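To illustrate the execution order (a minimal sketch with hypothetical names):
class Demo:
    x = 1
    y = x + 1      # fine: `x` is already in the class body namespace
    # z = Demo.x   # would raise NameError: the class object doesn't exist yet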

Polluting a class's environment

I have an object that holds lots of ids that are accessed statically. I want to split that up into another object which holds only those ids, without the need of making modifications to the already existing code base. Take for example:
class _CarType(object):
    DIESEL_CAR_ENGINE = 0
    GAS_CAR_ENGINE = 1  # lots of these ids

class Car(object):
    types = _CarType
I want to be able to access _CarType.DIESEL_CAR_ENGINE either by calling Car.types.DIESEL_CAR_ENGINE, or by Car.DIESEL_CAR_ENGINE for backwards compatibility with the existing code. It's clear that I cannot use __getattr__, so I am trying to find a way of making this work (maybe metaclasses?).
Although this is not exactly what subclassing is made for, it accomplishes what you describe:
class _CarType(object):
    DIESEL_CAR_ENGINE = 0
    GAS_CAR_ENGINE = 1  # lots of these ids

class Car(_CarType):
    types = _CarType
Something like:
class Car(object):
    for attr, value in _CarType.__dict__.items():
        if not attr.startswith('_'):
            locals()[attr] = value
    del attr, value
Or you can do it outside of the class declaration:
class Car(object):
    # snip
    pass

for attr, value in _CarType.__dict__.items():
    if not attr.startswith('_'):
        setattr(Car, attr, value)
del attr, value
This is how you could do this with a metaclass:
class _CarType(type):
    DIESEL_CAR_ENGINE = 0
    GAS_CAR_ENGINE = 1  # lots of these ids

    def __init__(self, name, bases, dct):
        for key in dir(_CarType):
            if key.isupper():
                setattr(self, key, getattr(_CarType, key))

class Car(object):
    __metaclass__ = _CarType

print(Car.DIESEL_CAR_ENGINE)
print(Car.GAS_CAR_ENGINE)
Your options fall into two substantial categories: you either copy the attributes from _CarType into Car, or set Car's metaclass to a custom one with a __getattr__ method that delegates to _CarType (so it isn't exactly true that you can't use __getattr__: you can, you just need to put it in Car's metaclass rather than in Car itself;-).
The second choice has implications that you might find peculiar (unless they are specifically desired): the attributes don't show up on dir(Car), and they can't be accessed on an instance of Car, only on Car itself. I.e.:
>>> class MetaGetattr(type):
...     def __getattr__(cls, nm):
...         return getattr(cls.types, nm)
...
>>> class Car:
...     __metaclass__ = MetaGetattr
...     types = _CarType
...
>>> Car.GAS_CAR_ENGINE
1
>>> Car().GAS_CAR_ENGINE
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Car' object has no attribute 'GAS_CAR_ENGINE'
You could fix the "not from an instance" issue by also adding a __getattr__ to Car:
>>> class Car:
...     __metaclass__ = MetaGetattr
...     types = _CarType
...     def __getattr__(self, nm):
...         return getattr(self.types, nm)
...
to make both kinds of lookup work, as is probably expected:
>>> Car.GAS_CAR_ENGINE
1
>>> Car().GAS_CAR_ENGINE
1
However, defining two essentially-equal __getattr__s doesn't seem elegant.
So I suspect that the simpler approach, "copy all attributes", is preferable. In Python 2.6 or better, this is an obvious candidate for a class decorator:
def typesfrom(typesclass):
    def decorate(cls):
        cls.types = typesclass
        for n in dir(typesclass):
            if n[0] == '_':
                continue
            v = getattr(typesclass, n)
            setattr(cls, n, v)
        return cls
    return decorate

@typesfrom(_CarType)
class Car(object):
    pass
In general, it's worth defining a decorator if you're using it more than once; if you only need to perform this task for one class ever, then expanding the code inline instead (after the class statement) may be better.
If you're stuck with Python 2.5 (or even 2.4), you can still define typesfrom the same way, you just apply it in a slightly less elegant manner, i.e., the Car definition becomes:
class Car(object):
    pass
Car = typesfrom(_CarType)(Car)
Do remember decorator syntax (introduced in 2.4 for functions, in 2.6 for classes) is just a handy way to wrap these important and frequently recurring semantics.
class _CarType(object):
    DIESEL_CAR_ENGINE = 0
    GAS_CAR_ENGINE = 1  # lots of these ids

class Car:
    types = _CarType
    def __getattr__(self, name):
        return getattr(self.types, name)
If an attribute of an object is not found, and its class defines the magic method __getattr__, that gets called to try to find it.
This only works on a Car instance, not on the Car class itself.
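A quick demonstration of that limitation:
car = Car()
print(car.DIESEL_CAR_ENGINE)  # 0, found via __getattr__ on the instance
print(Car.DIESEL_CAR_ENGINE)  # AttributeError: __getattr__ defined on Car
                              # is not consulted for lookups on the class itself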

What are some (concrete) use-cases for metaclasses?

I have a friend who likes to use metaclasses, and regularly offers them as a solution.
I am of the mind that you almost never need to use metaclasses. Why? because I figure if you are doing something like that to a class, you should probably be doing it to an object. And a small redesign/refactor is in order.
Being able to use metaclasses has caused a lot of people in a lot of places to use classes as some kind of second rate object, which just seems disastrous to me. Is programming to be replaced by meta-programming? The addition of class decorators has unfortunately made it even more acceptable.
So please, I am desperate to know your valid (concrete) use-cases for metaclasses in Python. Or to be enlightened as to why mutating classes is better than mutating objects, sometimes.
I will start:
Sometimes when using a third-party library it is useful to be able to mutate the class in a certain way.
(This is the only case I can think of, and it's not concrete.)
I was asked the same question recently, and came up with several answers. I hope it's OK to revive this thread, as I wanted to elaborate on a few of the use cases mentioned, and add a few new ones.
Most metaclasses I've seen do one of two things:
Registration (adding a class to a data structure):
models = {}

class ModelMetaclass(type):
    def __new__(meta, name, bases, attrs):
        models[name] = cls = type.__new__(meta, name, bases, attrs)
        return cls

class Model(object):
    __metaclass__ = ModelMetaclass
Whenever you subclass Model, your class is registered in the models dictionary:
>>> class A(Model):
...     pass
...
>>> class B(A):
...     pass
...
>>> models
{'A': <__main__.A class at 0x...>,
 'B': <__main__.B class at 0x...>}
This can also be done with class decorators:
models = {}

def model(cls):
    models[cls.__name__] = cls
    return cls

@model
class A(object):
    pass
Or with an explicit registration function:
models = {}

def register_model(cls):
    models[cls.__name__] = cls

class A(object):
    pass

register_model(A)
Actually, this is pretty much the same: you mention class decorators unfavorably, but it's really nothing more than syntactic sugar for a function invocation on a class, so there's no magic about it.
Anyway, the advantage of metaclasses in this case is inheritance, as they work for any subclasses, whereas the other solutions only work for subclasses explicitly decorated or registered.
>>> class B(A):
...     pass
...
>>> models
{'A': <__main__.A class at 0x...>}  # No B :(
Refactoring (modifying class attributes or adding new ones):
class ModelMetaclass(type):
    def __new__(meta, name, bases, attrs):
        fields = {}
        for key, value in attrs.items():
            if isinstance(value, Field):
                value.name = '%s.%s' % (name, key)
                fields[key] = value
        for base in bases:
            if hasattr(base, '_fields'):
                fields.update(base._fields)
        attrs['_fields'] = fields
        return type.__new__(meta, name, bases, attrs)

class Model(object):
    __metaclass__ = ModelMetaclass
Whenever you subclass Model and define some Field attributes, they are injected with their names (for more informative error messages, for example), and grouped into a _fields dictionary (for easy iteration, without having to look through all the class attributes and all its base classes' attributes every time):
>>> class A(Model):
...     foo = Integer()
...
>>> class B(A):
...     bar = String()
...
>>> B._fields
{'foo': Integer('A.foo'), 'bar': String('B.bar')}
Again, this can be done (without inheritance) with a class decorator:
def model(cls):
    fields = {}
    for key, value in vars(cls).items():
        if isinstance(value, Field):
            value.name = '%s.%s' % (cls.__name__, key)
            fields[key] = value
    for base in cls.__bases__:
        if hasattr(base, '_fields'):
            fields.update(base._fields)
    cls._fields = fields
    return cls

@model
class A(object):
    foo = Integer()

class B(A):
    bar = String()
# B.bar has no name :(
# B._fields is {'foo': Integer('A.foo')} :(
Or explicitly:
class A(object):
    foo = Integer('A.foo')
    _fields = {'foo': foo}  # Don't forget all the base classes' fields, too!
Although, contrary to your advocacy for readable and maintainable non-meta programming, this is much more cumbersome, redundant and error prone:
class B(A):
    bar = String()
# vs.
class B(A):
    bar = String('bar')
    _fields = {'B.bar': bar, 'A.foo': A.foo}
Having considered the most common and concrete use cases, the only cases where you absolutely HAVE to use metaclasses are when you want to modify the class name or list of base classes, because once defined, these parameters are baked into the class, and no decorator or function can unbake them.
class Metaclass(type):
    def __new__(meta, name, bases, attrs):
        return type.__new__(meta, 'foo', (int,), attrs)

class Baseclass(object):
    __metaclass__ = Metaclass

class A(Baseclass):
    pass

class B(A):
    pass

print A.__name__  # foo
print B.__name__  # foo
print issubclass(B, A)    # False
print issubclass(B, int)  # True
This may be useful in frameworks for issuing warnings whenever classes with similar names or incomplete inheritance trees are defined, but I can't think of a reason beside trolling to actually change these values. Maybe David Beazley can.
Anyway, in Python 3, metaclasses also have the __prepare__ method, which lets you evaluate the class body into a mapping other than a dict, thus supporting ordered attributes, overloaded attributes, and other wicked cool stuff:
import collections

class Metaclass(type):
    @classmethod
    def __prepare__(meta, name, bases, **kwds):
        return collections.OrderedDict()

    def __new__(meta, name, bases, attrs, **kwds):
        print(list(attrs))
        # Do more stuff...

class A(metaclass=Metaclass):
    x = 1
    y = 2

# prints ['x', 'y'] rather than ['y', 'x']
 
class ListDict(dict):
    def __setitem__(self, key, value):
        self.setdefault(key, []).append(value)

class Metaclass(type):
    @classmethod
    def __prepare__(meta, name, bases, **kwds):
        return ListDict()

    def __new__(meta, name, bases, attrs, **kwds):
        print(attrs['foo'])
        # Do more stuff...

class A(metaclass=Metaclass):
    def foo(self):
        pass
    def foo(self, x):
        pass

# prints [<function foo at 0x...>, <function foo at 0x...>] rather than <function foo at 0x...>
You might argue ordered attributes can be achieved with creation counters, and overloading can be simulated with default arguments:
import itertools

class Attribute(object):
    _counter = itertools.count()
    def __init__(self):
        self._count = Attribute._counter.next()

class A(object):
    x = Attribute()
    y = Attribute()

A._order = sorted([(k, v) for k, v in vars(A).items() if isinstance(v, Attribute)],
                  key=lambda (k, v): v._count)
 
class A(object):
    def _foo0(self):
        pass
    def _foo1(self, x):
        pass
    def foo(self, x=None):
        if x is None:
            return self._foo0()
        else:
            return self._foo1(x)
Besides being much more ugly, it's also less flexible: what if you want ordered literal attributes, like integers and strings? What if None is a valid value for x?
Here's a creative way to solve the first problem:
import sys

class Builder(object):
    def __call__(self, cls):
        cls._order = self.frame.f_code.co_names
        return cls

def ordered():
    builder = Builder()
    def trace(frame, event, arg):
        builder.frame = frame
        sys.settrace(None)
    sys.settrace(trace)
    return builder

@ordered()
class A(object):
    x = 1
    y = 'foo'

print A._order  # ['x', 'y']
And here's a creative way to solve the second one:
_undefined = object()

class A(object):
    def _foo0(self):
        pass
    def _foo1(self, x):
        pass
    def foo(self, x=_undefined):
        if x is _undefined:
            return self._foo0()
        else:
            return self._foo1(x)
But this is much, MUCH voodoo-er than a simple metaclass (especially the first one, which really melts your brain). My point is, you look at metaclasses as unfamiliar and counter-intuitive, but you can also look at them as the next step of evolution in programming languages: you just have to adjust your mindset. After all, you could probably do everything in C, including defining a struct with function pointers and passing it as the first argument to its functions. A person seeing C++ for the first time might say, "what is this magic? Why is the compiler implicitly passing this to methods, but not to regular and static functions? It's better to be explicit and verbose about your arguments". But then, object-oriented programming is much more powerful once you get it; and so is this, uh... quasi-aspect-oriented programming, I guess. And once you understand metaclasses, they're actually very simple, so why not use them when convenient?
And finally, metaclasses are rad, and programming should be fun. Using standard programming constructs and design patterns all the time is boring and uninspiring, and hinders your imagination. Live a little! Here's a metametaclass, just for you.
class MetaMetaclass(type):
def __new__(meta, name, bases, attrs):
def __new__(meta, name, bases, attrs):
cls = type.__new__(meta, name, bases, attrs)
cls._label = 'Made in %s' % meta.__name__
return cls
attrs['__new__'] = __new__
return type.__new__(meta, name, bases, attrs)
class China(type):
__metaclass__ = MetaMetaclass
class Taiwan(type):
__metaclass__ = MetaMetaclass
class A(object):
__metaclass__ = China
class B(object):
__metaclass__ = Taiwan
print A._label # Made in China
print B._label # Made in Taiwan
Edit
This is a pretty old question, but it's still getting upvotes, so I thought I'd add a link to a more comprehensive answer. If you'd like to read more about metaclasses and their uses, I've just published an article about it here.
The purpose of metaclasses isn't to replace the class/object distinction with metaclass/class - it's to change the behaviour of class definitions (and thus their instances) in some way. Effectively it's to alter the behaviour of the class statement in ways that may be more useful for your particular domain than the default. The things I have used them for are:
Tracking subclasses, usually to register handlers. This is handy when using a plugin style setup, where you wish to register a handler for a particular thing simply by subclassing and setting up a few class attributes. E.g. suppose you write a handler for various music formats, where each class implements appropriate methods (play / get tags etc) for its type. Adding a handler for a new type becomes:
class Mp3File(MusicFile):
    extensions = ['.mp3']  # Register this type as a handler for mp3 files
    ...
    # Implementation of mp3 methods goes here
The metaclass then maintains a dictionary of {'.mp3' : MP3File, ... } etc, and constructs an object of the appropriate type when you request a handler through a factory function.
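A minimal sketch of that registry idea (hypothetical names, modern Python 3 metaclass syntax; the real code would live in the library):
import os

class MusicMeta(type):
    handlers = {}
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        for ext in attrs.get('extensions', ()):
            MusicMeta.handlers[ext] = cls  # register each subclass by extension

class MusicFile(metaclass=MusicMeta):
    extensions = ()

class Mp3File(MusicFile):
    extensions = ['.mp3']

def open_music(path):
    # factory function: pick the handler class registered for the extension
    ext = os.path.splitext(path)[1]
    return MusicMeta.handlers[ext]()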
Changing behaviour. You may want to attach a special meaning to certain attributes, resulting in altered behaviour when they are present. For example, you may want to look for methods with the name _get_foo and _set_foo and transparently convert them to properties. As a real-world example, here's a recipe I wrote to give more C-like struct definitions. The metaclass is used to convert the declared items into a struct format string, handling inheritance etc, and produce a class capable of dealing with it.
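A minimal sketch of the _get_foo/_set_foo idea (Python 3 syntax, hypothetical names):
class AutoProperty(type):
    def __new__(meta, name, bases, attrs):
        # turn _get_xxx/_set_xxx method pairs into properties named xxx
        for key in list(attrs):
            if key.startswith('_get_'):
                prop_name = key[len('_get_'):]
                getter = attrs[key]
                setter = attrs.get('_set_' + prop_name)
                attrs[prop_name] = property(getter, setter)
        return super().__new__(meta, name, bases, attrs)

class Point(metaclass=AutoProperty):
    def _get_x(self):
        return self._x
    def _set_x(self, value):
        self._x = value

p = Point()
p.x = 3      # dispatches to _set_x
print(p.x)   # 3, via _get_x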
For other real-world examples, take a look at various ORMs, like sqlalchemy's ORM or sqlobject. Again, the purpose is to interpret definitions (here SQL column definitions) with a particular meaning.
I have a class that handles non-interactive plotting, as a frontend to Matplotlib. However, on occasion one wants to do interactive plotting. With only a couple functions I found that I was able to increment the figure count, call draw manually, etc, but I needed to do these before and after every plotting call. So to create both an interactive plotting wrapper and an offscreen plotting wrapper, I found it was more efficient to do this via metaclasses, wrapping the appropriate methods, than to do something like:
class PlottingInteractive:
    add_slice = wrap_pylab_newplot(add_slice)
This method doesn't keep up with API changes and so on, but one that iterates over the class attributes in __init__ before re-setting the class attributes is more efficient and keeps things up to date:
import types

class _Interactify(type):
    def __init__(cls, name, bases, d):
        super(_Interactify, cls).__init__(name, bases, d)
        for base in bases:
            for attrname in dir(base):
                if attrname in d:
                    continue  # If overridden, don't reset
                attr = getattr(cls, attrname)
                if type(attr) == types.MethodType:
                    if attrname.startswith("add_"):
                        setattr(cls, attrname, wrap_pylab_newplot(attr))
                    elif attrname.startswith("set_"):
                        setattr(cls, attrname, wrap_pylab_show(attr))
Of course, there might be better ways to do this, but I've found this to be effective. Of course, this could also be done in __new__ or __init__, but this was the solution I found the most straightforward.
Let's start with Tim Peters' classic quote:
Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don't (the people who actually need them know with certainty that they need them, and don't need an explanation about why). -- Tim Peters (c.l.p post 2002-12-22)
Having said that, I have (periodically) run across true uses of metaclasses. The one that comes to mind is in Django where all of your models inherit from models.Model. models.Model, in turn, does some serious magic to wrap your DB models with Django's ORM goodness. That magic happens by way of metaclasses. It creates all manner of exception classes, manager classes, etc. etc.
See django/db/models/base.py, class ModelBase() for the beginning of the story.
A reasonable pattern of metaclass use is doing something once when a class is defined rather than repeatedly whenever the same class is instantiated.
When multiple classes share the same special behaviour, repeating __metaclass__=X is obviously better than repeating the special purpose code and/or introducing ad-hoc shared superclasses.
But even with only one special class and no foreseeable extension, __new__ and __init__ of a metaclass are a cleaner way to initialize class variables or other global data than intermixing special-purpose code and normal def and class statements in the class definition body.
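A minimal sketch of that pattern (hypothetical names, Python 3 syntax): the per-class setup cost is paid once, when the class statement runs, not per instance.
class Precomputed(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        # runs exactly once, when the class statement executes
        cls.lookup_table = {i: i * i for i in range(256)}

class Codec(metaclass=Precomputed):
    def encode(self, value):
        return self.lookup_table[value]  # instances share the precomputed table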
Metaclasses can be handy for construction of Domain Specific Languages in Python. Concrete examples are Django's and SQLObject's declarative syntax of database schemata.
A basic example from A Conservative Metaclass by Ian Bicking:
The metaclasses I've used have been primarily to support a sort of declarative style of programming. For instance, consider a validation schema:
class Registration(schema.Schema):
    first_name = validators.String(notEmpty=True)
    last_name = validators.String(notEmpty=True)
    mi = validators.MaxLength(1)
    class Numbers(foreach.ForEach):
        class Number(schema.Schema):
            type = validators.OneOf(['home', 'work'])
            phone_number = validators.PhoneNumber()
Some other techniques: Ingredients for Building a DSL in Python (pdf).
Edit (by Ali): An example of doing this using collections and instances is what I would prefer. The important fact is the instances, which give you more power and eliminate the reason to use metaclasses. It's further worth noting that your example uses a mixture of classes and instances, which is surely an indication that you can't just do it all with metaclasses, and creates a truly non-uniform way of doing it.
number_validator = [
    v.OneOf('type', ['home', 'work']),
    v.PhoneNumber('phone_number'),
]

validators = [
    v.String('first_name', notEmpty=True),
    v.String('last_name', notEmpty=True),
    v.MaxLength('mi', 1),
    v.ForEach([number_validator, ])
]
It's not perfect, but already there is almost zero magic, no need for metaclasses, and improved uniformity.
I was thinking the same thing just yesterday and completely agree. The complications in the code caused by attempts to make it more declarative generally make the codebase harder to maintain, harder to read and less pythonic in my opinion.
It also normally requires a lot of copy.copy()ing (to maintain inheritance and to copy from class to instance) and means you have to look in many places to see what's going on (always looking from metaclass up), which goes against the python grain also.
I have been picking through formencode and sqlalchemy code to see if such a declarative style was worth it, and it's clearly not. Such style should be left to descriptors (such as property and methods) and immutable data.
Ruby has better support for such declarative styles and I am glad the core python language is not going down that route.
I can see their use for debugging: add a metaclass to all your base classes to get richer info.
I also see their use only in (very) large projects to get rid of some boilerplate code (but at the loss of clarity). sqlalchemy, for example, does use them elsewhere, to add a particular custom method to all subclasses based on an attribute value in their class definition,
e.g. a toy example:
class test(baseclass_with_metaclass):
    method_maker_value = "hello"
could have a metaclass that generated a method in that class with special properties based on "hello" (say, a method that added "hello" to the end of a string). It could be good for maintainability to make sure you did not have to write a method in every subclass you make; instead, all you have to define is method_maker_value.
The need for this is so rare, though, and it only cuts down on a bit of typing, so it's not really worth considering unless you have a large enough codebase.
The only legitimate use-case of a metaclass is to keep other nosy developers from touching your code. Once a nosy developer masters metaclasses and starts poking around with yours, throw in another level or two to keep them out. If that doesn't work, start using type.__new__ or perhaps some scheme using a recursive metaclass.
(written tongue in cheek, but I've seen this kind of obfuscation done. Django is a perfect example)
The only time I used metaclasses in Python was when writing a wrapper for the Flickr API.
My goal was to scrape flickr's api site and dynamically generate a complete class hierarchy to allow API access using Python objects:
# Both the photo type and the flickr.photos.search API method
# are generated at "run-time"
for photo in flickr.photos.search(text='balloons'):
    print photo.description
So in that example, because I generated the entire Python Flickr API from the website, I really didn't know the class definitions until runtime. Being able to dynamically generate types was very useful.
Metaclasses aren't replacing programming! They're just a trick which can automate or make more elegant some tasks. A good example of this is Pygments syntax highlighting library. It has a class called RegexLexer which lets the user define a set of lexing rules as regular expressions on a class. A metaclass is used to turn the definitions into a useful parser.
They're like salt; it's easy to use too much.
Some GUI libraries have trouble when multiple threads try to interact with them. tkinter is one such example; and while one can explicitly handle the problem with events and queues, it can be far simpler to use the library in a manner that ignores the problem altogether. Behold -- the magic of metaclasses.
Being able to dynamically rewrite an entire library seamlessly so that it works properly as expected in a multithreaded application can be extremely helpful in some circumstances. The safetkinter module does that with the help of a metaclass provided by the threadbox module -- events and queues not needed.
One neat aspect of threadbox is that it does not care what class it clones. It provides an example of how all base classes can be touched by a metaclass if needed. A further benefit that comes with metaclasses is that they run on inheriting classes as well. Programs that write themselves -- why not?
You never absolutely need to use a metaclass, since you can always construct a class that does what you want using inheritance or aggregation of the class you want to modify.
That said, it can be very handy in Smalltalk and Ruby to be able to modify an existing class, but Python doesn't like to do that directly.
There's an excellent DeveloperWorks article on metaclassing in Python that might help. The Wikipedia article is also pretty good.
The way I used metaclasses was to provide some attributes to classes. Take for example:
class NameClass(type):
    def __init__(cls, *args, **kwargs):
        type.__init__(cls, *args, **kwargs)
        cls.name = cls.__name__
This will put the name attribute on every class that has its metaclass set to point to NameClass.
This is a minor use, but... one thing I've found metaclasses useful for is to invoke a function whenever a subclass is created. I codified this into a metaclass which looks for an __initsubclass__ attribute: whenever a subclass is created, all parent classes which define that method are invoked with __initsubclass__(cls, subcls). This allows creation of a parent class which then registers all subclasses with some global registry, runs invariant checks on subclasses whenever they are defined, performs late-binding operations, etc... all without having to manually call functions or to create custom metaclasses that perform each of these separate duties.
Mind you, I've slowly come to realize the implicit magicalness of this behavior is somewhat undesirable, since it's unexpected if looking at a class definition out of context... and so I've moved away from using that solution for anything serious besides initializing a __super attribute for each class and instance.
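For reference, Python 3.6 added the __init_subclass__ hook (PEP 487), which covers this exact pattern without a custom metaclass; a minimal registry sketch:
registry = {}

class PluginBase:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        registry[cls.__name__] = cls  # runs automatically for every subclass

class MyPlugin(PluginBase):
    pass

assert registry['MyPlugin'] is MyPlugin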
I recently had to use a metaclass to help declaratively define an SQLAlchemy model around a database table populated with U.S. Census data from http://census.ire.org/data/bulkdata.html
IRE provides database shells for the census data tables, which create integer columns following a naming convention from the Census Bureau of p012015, p012016, p012017, etc.
I wanted to a) be able to access these columns using a model_instance.p012017 syntax, b) be fairly explicit about what I was doing and c) not have to explicitly define dozens of fields on the model, so I subclassed SQLAlchemy's DeclarativeMeta to iterate through a range of the columns and automatically create model fields corresponding to the columns:
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative.api import DeclarativeMeta

class CensusTableMeta(DeclarativeMeta):
    def __init__(cls, classname, bases, dict_):
        table = 'p012'
        for i in range(1, 49):
            fname = "%s%03d" % (table, i)
            dict_[fname] = Column(Integer)
            setattr(cls, fname, dict_[fname])
        super(CensusTableMeta, cls).__init__(classname, bases, dict_)
I could then use this metaclass for my model definition and access the automatically enumerated fields on the model:
CensusTableBase = declarative_base(metaclass=CensusTableMeta)

class P12Tract(CensusTableBase):
    __tablename__ = 'ire_p12'
    geoid = Column(String(12), primary_key=True)

    @property
    def male_under_5(self):
        return self.p012003
    ...
There seems to be a legitimate use described here - Rewriting Python Docstrings with a Metaclass.
Pydantic is a library for data validation and settings management that enforces type hints at runtime and provides user friendly errors when data is invalid. It makes use of metaclasses for its BaseModel and for number range validation.
At work I encountered some code that had a process with several stages defined by classes. The ordering of these steps was controlled by a metaclass that added the steps to a list as the classes were defined. This was thrown out, and the order was set by adding them to a list instead.
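A minimal sketch of the pattern that code used (hypothetical names):
steps = []

class StepMeta(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        if bases:  # skip the abstract base class itself
            steps.append(cls)  # definition order determines execution order

class Step(metaclass=StepMeta):
    pass

class Extract(Step): pass
class Transform(Step): pass
class Load(Step): pass

assert steps == [Extract, Transform, Load]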
I had to use them once for a binary parser to make it easier to use. You define a message class with attributes of the fields present on the wire.
They needed to be ordered in the way they were declared, to construct the final wire format from them. You can do that with metaclasses, if you use an ordered namespace dict. In fact, it's in the examples for metaclasses:
https://docs.python.org/3/reference/datamodel.html#metaclass-example
But in general: Very carefully evaluate, if you really really need the added complexity of metaclasses.
The answer from @Dan Gittik is cool.
The examples at the end could clarify many things, so I changed them to Python 3 and added some explanation:
class MetaMetaclass(type):
    def __new__(meta, name, bases, attrs):
        def __new__(meta, name, bases, attrs):
            cls = type.__new__(meta, name, bases, attrs)
            cls._label = 'Made in %s' % meta.__name__
            return cls
        attrs['__new__'] = __new__
        return type.__new__(meta, name, bases, attrs)

# China is a metaclass, and its __new__ method is replaced by MetaMetaclass (a meta-metaclass)
class China(MetaMetaclass, metaclass=MetaMetaclass):
    __metaclass__ = MetaMetaclass

# Taiwan is a metaclass, and its __new__ method is replaced by MetaMetaclass (a meta-metaclass)
class Taiwan(MetaMetaclass, metaclass=MetaMetaclass):
    __metaclass__ = MetaMetaclass

# A is a normal class, and its __new__ method is replaced by China (a metaclass)
class A(metaclass=China):
    __metaclass__ = China

# B is a normal class, and its __new__ method is replaced by Taiwan (a metaclass)
class B(metaclass=Taiwan):
    __metaclass__ = Taiwan

print(A._label)  # Made in China
print(B._label)  # Made in Taiwan
everything is an object, so a class is an object
class objects are created by metaclasses
any class that inherits from type is a metaclass
a metaclass can control class creation
a metaclass can also control metaclass creation (so it could loop forever)
this is metaprogramming... you can control the type system at run time
again, everything is an object; this is a uniform system: types create types, and types create instances
Another use case is when you want to be able to modify class-level attributes and be sure that it only affects the object at hand. In practice, this implies "merging" the phases of metaclass and class instantiation, so that you end up dealing only with class instances of their own (unique) kind.
I also had to do that when (for concerns of readability and polymorphism) we wanted to dynamically define properties which returned values (possibly) resulting from calculations based on (often changing) instance-level attributes; that can only be done at the class level, i.e. after the metaclass is instantiated and before the class is.
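A rough sketch of that idea, assuming what's wanted is a throwaway subclass per instance so that class-level tweaks never leak to other instances (hypothetical names, Python 3):
class PerInstanceMeta(type):
    def __call__(cls, *args, **kwargs):
        # build a one-off subclass for this particular instance
        unique = super().__new__(type(cls), cls.__name__, (cls,), {})
        # instantiate via type.__call__ to avoid recursing into this method
        return super(PerInstanceMeta, unique).__call__(*args, **kwargs)

class Widget(metaclass=PerInstanceMeta):
    color = 'red'

a, b = Widget(), Widget()
type(a).color = 'blue'   # tweaks a's private class only
assert (a.color, b.color) == ('blue', 'red')
assert isinstance(a, Widget) and type(a) is not Widget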
I know this is an old question, but here is a use case that is really invaluable if you want to create only a single instance of a class based on the parameters passed to the constructor.
Instance singletons
I use this code for creating a singleton instance of a device on a Z-Wave network. No matter how many times I create an instance, if the same values are passed to the constructor and an instance with the exact same values already exists, then that is what gets returned.
import inspect

class SingletonMeta(type):
    # only here to make the IDE happy
    _instances = {}

    def __init__(cls, name, bases, dct):
        super(SingletonMeta, cls).__init__(name, bases, dct)
        cls._instances = {}

    def __call__(cls, *args, **kwargs):
        sig = inspect.signature(cls.__init__)
        keywords = {}
        for i, param in enumerate(list(sig.parameters.values())[1:]):
            if len(args) > i:
                keywords[param.name] = args[i]
            elif param.name not in kwargs and param.default != param.empty:
                keywords[param.name] = param.default
            elif param.name in kwargs:
                keywords[param.name] = kwargs[param.name]

        key = []
        for k in sorted(list(keywords.keys())):
            key.append(keywords[k])
        key = tuple(key)

        if key not in cls._instances:
            cls._instances[key] = (
                super(SingletonMeta, cls).__call__(*args, **kwargs)
            )
        return cls._instances[key]

class Test1(metaclass=SingletonMeta):
    def __init__(self, param1, param2='test'):
        pass

class Test2(metaclass=SingletonMeta):
    def __init__(self, param3='test1', param4='test2'):
        pass
test1 = Test1('test1')
test2 = Test1('test1', 'test2')
test3 = Test1('test1', 'test')
test4 = Test2()
test5 = Test2(param4='test1')
test6 = Test2('test2', 'test1')
test7 = Test2('test1')
print('test1 == test2:', test1 == test2)
print('test2 == test3:', test2 == test3)
print('test1 == test3:', test1 == test3)
print('test4 == test2:', test4 == test2)
print('test7 == test3:', test7 == test3)
print('test6 == test4:', test6 == test4)
print('test7 == test4:', test7 == test4)
print('test5 == test6:', test5 == test6)
print('number of Test1 instances:', len(Test1._instances))
print('number of Test2 instances:', len(Test2._instances))
output
test1 == test2: False
test2 == test3: False
test1 == test3: True
test4 == test2: False
test7 == test3: False
test6 == test4: False
test7 == test4: True
test5 == test6: False
number of Test1 instances: 2
number of Test2 instances: 3
Now someone might say it can be done without the use of a metaclass, and I know it can be done if the __init__ method is decorated. I do not know of another way to do it. The code below, while it will return a similar instance that contains all of the same data, is not a singleton instance; a new instance gets created. Because it creates a new instance with the same data, additional steps would need to be taken to check equality of instances. In the end it consumes more memory than using a metaclass, and with the metaclass no additional steps need to be taken to check equality.
class Singleton(object):
    _instances = {}

    def __init__(self, param1, param2='test'):
        key = (param1, param2)
        if key in self._instances:
            self.__dict__.update(self._instances[key].__dict__)
        else:
            self.param1 = param1
            self.param2 = param2
            self._instances[key] = self
test1 = Singleton('test1', 'test2')
test2 = Singleton('test')
test3 = Singleton('test', 'test')
print('test1 == test2:', test1 == test2)
print('test2 == test3:', test2 == test3)
print('test1 == test3:', test1 == test3)
print('test1 params', test1.param1, test1.param2)
print('test2 params', test2.param1, test2.param2)
print('test3 params', test3.param1, test3.param2)
print('number of Singleton instances:', len(Singleton._instances))
output
test1 == test2: False
test2 == test3: False
test1 == test3: False
test1 params test1 test2
test2 params test test
test3 params test test
number of Singleton instances: 2
The metaclass approach is also really nice to use if you need to check for the removal or addition of a new instance.
import inspect

class SingletonMeta(type):
    # only here to make the IDE happy
    _instances = {}

    def __init__(cls, name, bases, dct):
        super(SingletonMeta, cls).__init__(name, bases, dct)
        cls._instances = {}

    def __call__(cls, *args, **kwargs):
        sig = inspect.signature(cls.__init__)
        keywords = {}
        for i, param in enumerate(list(sig.parameters.values())[1:]):
            if len(args) > i:
                keywords[param.name] = args[i]
            elif param.name not in kwargs and param.default != param.empty:
                keywords[param.name] = param.default
            elif param.name in kwargs:
                keywords[param.name] = kwargs[param.name]

        key = []
        for k in sorted(list(keywords.keys())):
            key.append(keywords[k])
        key = tuple(key)

        if key not in cls._instances:
            cls._instances[key] = (
                super(SingletonMeta, cls).__call__(*args, **kwargs)
            )
        return cls._instances[key]

class Test(metaclass=SingletonMeta):
    def __init__(self, param1, param2='test'):
        pass

instances = []
instances.append(Test('test1', 'test2'))
instances.append(Test('test1', 'test'))
print('number of instances:', len(instances))

instance = Test('test2', 'test3')
if instance not in instances:
    instances.append(instance)

instance = Test('test1', 'test2')
if instance not in instances:
    instances.append(instance)
print('number of instances:', len(instances))
output
number of instances: 2
number of instances: 3
Here is a way to remove an instance that has been created, once the instance is no longer in use.
import inspect
import weakref

class SingletonMeta(type):
    # only here to make the IDE happy
    _instances = {}

    def __init__(cls, name, bases, dct):
        super(SingletonMeta, cls).__init__(name, bases, dct)

        def remove_instance(c, ref):
            # weakref callback: drop the dead reference from the registry
            for k, v in list(c._instances.items()):
                if v == ref:
                    del c._instances[k]
                    break

        cls.remove_instance = classmethod(remove_instance)
        cls._instances = {}

    def __call__(cls, *args, **kwargs):
        sig = inspect.signature(cls.__init__)
        keywords = {}
        for i, param in enumerate(list(sig.parameters.values())[1:]):
            if len(args) > i:
                keywords[param.name] = args[i]
            elif param.name not in kwargs and param.default != param.empty:
                keywords[param.name] = param.default
            elif param.name in kwargs:
                keywords[param.name] = kwargs[param.name]

        key = []
        for k in sorted(list(keywords.keys())):
            key.append(keywords[k])
        key = tuple(key)

        if key not in cls._instances:
            instance = super(SingletonMeta, cls).__call__(*args, **kwargs)
            cls._instances[key] = weakref.ref(
                instance,
                instance.remove_instance
            )
        return cls._instances[key]()

class Test1(metaclass=SingletonMeta):
    def __init__(self, param1, param2='test'):
        pass

class Test2(metaclass=SingletonMeta):
    def __init__(self, param3='test1', param4='test2'):
        pass

test1 = Test1('test1')
test2 = Test1('test1', 'test2')
test3 = Test1('test1', 'test')
test4 = Test2()
test5 = Test2(param4='test1')
test6 = Test2('test2', 'test1')
test7 = Test2('test1')

print('test1 == test2:', test1 == test2)
print('test2 == test3:', test2 == test3)
print('test1 == test3:', test1 == test3)
print('test4 == test2:', test4 == test2)
print('test7 == test3:', test7 == test3)
print('test6 == test4:', test6 == test4)
print('test7 == test4:', test7 == test4)
print('test5 == test6:', test5 == test6)
print('number of Test1 instances:', len(Test1._instances))
print('number of Test2 instances:', len(Test2._instances))
print()

del test1
del test5
del test6
print('number of Test1 instances:', len(Test1._instances))
print('number of Test2 instances:', len(Test2._instances))
output
test1 == test2: False
test2 == test3: False
test1 == test3: True
test4 == test2: False
test7 == test3: False
test6 == test4: False
test7 == test4: True
test5 == test6: False
number of Test1 instances: 2
number of Test2 instances: 3
number of Test1 instances: 2
number of Test2 instances: 1
If you look at the output you will notice that the number of Test1 instances has not changed. That is because test1 and test3 are the same instance, and I only deleted test1, so there is still a reference to that instance in the code; as a result, it does not get removed.
Another nice feature of this is that if the instance uses only the supplied parameters to do whatever it is tasked to do, you can use the metaclass to facilitate remote creation of the instance, either on a different computer entirely or in a different process on the same machine. The parameters can simply be passed over a socket or a named pipe, and a replica of the instance can be created on the receiving end.
