There seem to be many ways to define singletons in Python. Is there a consensus opinion on Stack Overflow?
I don't really see the need, as a module with functions (and not a class) would serve well as a singleton. All its variables would be bound to the module, which could not be instantiated repeatedly anyway.
If you do wish to use a class, there is no way of creating private classes or private constructors in Python, so you can't protect against multiple instantiations other than by convention in how your API is used. I would still just put methods in a module, and consider the module as the singleton.
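For illustration, here's a minimal sketch of the module-as-singleton idea (the module name config.py and its contents are hypothetical):

# config.py -- the module itself acts as the singleton
_settings = {}

def set_option(key, value):
    _settings[key] = value

def get_option(key, default=None):
    return _settings.get(key, default)

Since Python caches modules in sys.modules, every piece of code that does import config sees the same _settings dict.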
Here's my own implementation of singletons. All you have to do is decorate the class; to get the singleton, you then use the instance method. Here's an example:
@Singleton
class Foo:
    def __init__(self):
        print('Foo created')

f = Foo()            # Error, this isn't how you get the instance of a singleton
f = Foo.instance()   # Good. Being explicit is in line with the Zen of Python
g = Foo.instance()   # Returns the already created instance

print(f is g)        # True
And here's the code:
class Singleton:
    """
    A non-thread-safe helper class to ease implementing singletons.
    This should be used as a decorator -- not a metaclass -- to the
    class that should be a singleton.

    The decorated class can define one `__init__` function that
    takes only the `self` argument. Also, the decorated class cannot be
    inherited from. Other than that, there are no restrictions that apply
    to the decorated class.

    To get the singleton instance, use the `instance` method. Trying
    to use `__call__` will result in a `TypeError` being raised.
    """

    def __init__(self, decorated):
        self._decorated = decorated

    def instance(self):
        """
        Returns the singleton instance. Upon its first call, it creates a
        new instance of the decorated class and calls its `__init__` method.
        On all subsequent calls, the already created instance is returned.
        """
        try:
            return self._instance
        except AttributeError:
            self._instance = self._decorated()
            return self._instance

    def __call__(self):
        raise TypeError('Singletons must be accessed through `instance()`.')

    def __instancecheck__(self, inst):
        return isinstance(inst, self._decorated)
You can override the __new__ method like this:
class Singleton(object):
    _instance = None

    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            # Note: object.__new__ takes no extra arguments
            cls._instance = super(Singleton, cls).__new__(cls)
        return cls._instance

if __name__ == '__main__':
    s1 = Singleton()
    s2 = Singleton()
    if id(s1) == id(s2):
        print("Same")
    else:
        print("Different")
A slightly different approach to implementing the singleton in Python is the Borg pattern by Alex Martelli (Google employee and Python genius).
class Borg:
    __shared_state = {}

    def __init__(self):
        self.__dict__ = self.__shared_state
So instead of forcing all instances to have the same identity, they share state.
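A quick demonstration of the shared state:

a = Borg()
b = Borg()
a.x = 42
print(b.x)     # 42 -- attribute assignments land in the shared __dict__
print(a is b)  # False -- the instances themselves remain distinct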
The module approach works well, but if I absolutely need a singleton, I prefer the metaclass approach.
class Singleton(type):
    def __init__(cls, name, bases, dct):
        super(Singleton, cls).__init__(name, bases, dct)
        cls.instance = None

    def __call__(cls, *args, **kw):
        if cls.instance is None:
            cls.instance = super(Singleton, cls).__call__(*args, **kw)
        return cls.instance

class MyClass(metaclass=Singleton):
    pass
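A quick check that the metaclass does its job:

a = MyClass()
b = MyClass()
print(a is b)  # True -- __call__ on the metaclass returned the cached instance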
See this implementation from PEP 318, implementing the singleton pattern with a decorator:
def singleton(cls):
    instances = {}
    def getinstance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return getinstance

@singleton
class MyClass:
    ...
The Python documentation does cover this:
class Singleton(object):
    def __new__(cls, *args, **kwds):
        it = cls.__dict__.get("__it__")
        if it is not None:
            return it
        cls.__it__ = it = object.__new__(cls)
        it.init(*args, **kwds)
        return it

    def init(self, *args, **kwds):
        pass
I would probably rewrite it to look more like this:
class Singleton(object):
    """Use to create a singleton"""

    def __new__(cls, *args, **kwds):
        """
        >>> s = Singleton()
        >>> p = Singleton()
        >>> id(s) == id(p)
        True
        """
        it_id = "__it__"
        # getattr will dip into base classes, so __dict__ must be used
        it = cls.__dict__.get(it_id, None)
        if it is not None:
            return it
        it = object.__new__(cls)
        setattr(cls, it_id, it)
        it.init(*args, **kwds)
        return it

    def init(self, *args, **kwds):
        pass

class A(Singleton):
    pass

class B(Singleton):
    pass

class C(A):
    pass

assert A() is A()
assert B() is B()
assert C() is C()
assert A() is not B()
assert C() is not B()
assert C() is not A()
It should be relatively clean to extend this:
class Bus(Singleton):
    def init(self, label=None, *args, **kwds):
        self.label = label
        self.channels = [Channel("system"), Channel("app")]
        ...
As the accepted answer says, the most idiomatic way is to just use a module.
With that in mind, here's a proof of concept:
def singleton(cls):
    obj = cls()
    # Always return the same object
    cls.__new__ = staticmethod(lambda cls: obj)
    # Disable __init__
    try:
        del cls.__init__
    except AttributeError:
        pass
    return cls
See the Python data model for more details on __new__.
Example:
@singleton
class Duck(object):
    pass

if Duck() is Duck():
    print("It works!")
else:
    print("It doesn't work!")
Notes:
You have to use new-style classes (derive from object) for this.
The singleton is initialized when it is defined, rather than the first time it's used.
This is just a toy example. I've never actually used this in production code, and don't plan to.
I'm very unsure about this, but my project uses 'convention singletons' (not enforced singletons), that is, if I have a class called DataController, I define this in the same module:
_data_controller = None

def GetDataController():
    global _data_controller
    if _data_controller is None:
        _data_controller = DataController()
    return _data_controller
It is not elegant, since it's a full six lines. But all my singletons use this pattern, and it's at least very explicit (which is pythonic).
The one time I wrote a singleton in Python I used a class where all the member functions had the classmethod decorator.
class Foo:
    x = 1

    @classmethod
    def increment(cls, y=1):
        cls.x += y
Creating a singleton decorator (aka an annotation) is an elegant way if you want to decorate (annotate) classes going forward. Then you just put @singleton before your class definition.
def singleton(cls):
    instances = {}
    def getinstance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return getinstance

@singleton
class MyClass:
    ...
There are also some interesting articles on the Google Testing blog discussing why singletons are (or may be) bad and an anti-pattern:
Singletons are Pathological Liars
Where Have All the Singletons Gone?
Root Cause of Singletons
I think that forcing a class or an instance to be a singleton is overkill. Personally, I like to define a normal instantiable class, a semi-private reference, and a simple factory function.
class NothingSpecial:
    pass

_the_one_and_only = None

def TheOneAndOnly():
    global _the_one_and_only
    if not _the_one_and_only:
        _the_one_and_only = NothingSpecial()
    return _the_one_and_only
Or if there is no issue with instantiating when the module is first imported:
class NothingSpecial:
    pass

THE_ONE_AND_ONLY = NothingSpecial()
That way you can write tests against fresh instances without side effects, there is no need to sprinkle the module with global statements, and you can derive variants in the future if needed.
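For instance, a test can simply bypass the factory and build its own object (a minimal sketch, assuming the NothingSpecial class above):

def test_nothing_special():
    obj = NothingSpecial()             # fresh instance, no shared state
    assert obj is not TheOneAndOnly()  # independent of the singleton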
The Singleton Pattern implemented with Python courtesy of ActiveState.
It looks like the trick is to put the class that's supposed to only have one instance inside of another class.
class Singleton(object):
    staticVar1 = None
    staticVar2 = None

    def __init__(self):
        if self.__class__.staticVar1 is None:
            # first instantiation: assign this instance's values
            # to the class static variables
            self.__class__.staticVar1 = 'value1'  # placeholder values
            self.__class__.staticVar2 = 'value2'
        # assign the class static variable values to this instance
        self.var1 = self.__class__.staticVar1
        self.var2 = self.__class__.staticVar2
class Singleton(type):
    instances = dict()

    def __call__(cls, *args, **kwargs):
        if cls.__name__ not in Singleton.instances:
            Singleton.instances[cls.__name__] = type.__call__(cls, *args, **kwargs)
        return Singleton.instances[cls.__name__]

class Test(metaclass=Singleton):
    pass

inst0 = Test()
inst1 = Test()
print(id(inst1) == id(inst0))
OK, singletons can be good or evil, I know. This is my implementation: I simply extend a classic approach and introduce an internal cache, so it can produce many instances of different types, or many instances of the same type but with different arguments.
I called it Singleton_group, because it groups similar instances together and prevents the creation of another object of the same class with the same arguments:
# Peppelinux's cached singleton
class Singleton_group(object):
    __instances_args_dict = {}

    def __new__(cls, *args, **kwargs):
        key = (cls.__name__, args, str(kwargs))
        if not cls.__instances_args_dict.get(key):
            # Note: object.__new__ takes no extra arguments
            cls.__instances_args_dict[key] = super(Singleton_group, cls).__new__(cls)
        return cls.__instances_args_dict.get(key)

# A dummy real-world usage example:
class test(Singleton_group):
    def __init__(self, salute):
        self.salute = salute

a = test('bye')
b = test('hi')
c = test('bye')
d = test('hi')
e = test('goodbye')
f = test('goodbye')
>>> id(a)
3070148780L
>>> id(b)
3070148908L
>>> id(c)
3070148780L
>>> b == d
True
>>> b._Singleton_group__instances_args_dict
{('test', ('bye',), '{}'): <__main__.test object at 0xb6fec0ac>,
 ('test', ('goodbye',), '{}'): <__main__.test object at 0xb6fec32c>,
 ('test', ('hi',), '{}'): <__main__.test object at 0xb6fec12c>}
Every object carries the singleton cache... This could be evil, but it works great for some :)
My simple solution, which is based on the default value of function parameters:
def getSystemContext(contextObjList=[]):
    if len(contextObjList) == 0:
        contextObjList.append(Context())
    return contextObjList[0]

class Context(object):
    pass  # anything you want here
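This works because default parameter values are evaluated once, at function definition time, so the same list object persists across calls:

ctx1 = getSystemContext()
ctx2 = getSystemContext()
print(ctx1 is ctx2)  # True -- the Context was created on the first call only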
Being relatively new to Python, I'm not sure what the most common idiom is, but the simplest thing I can think of is just using a module instead of a class. What would have been instance methods on your class become plain functions in the module, and any data becomes module-level variables instead of class members. I suspect this is the pythonic approach to the kind of problem people use singletons for.
If you really want a singleton class, there's a reasonable implementation described on the first hit on Google for "Python singleton", specifically:
class Singleton:
    __single = None

    def __init__(self):
        if Singleton.__single:
            # the original recipe raises the existing instance itself,
            # which only worked with Python 2 old-style classes
            raise RuntimeError('A Singleton instance already exists')
        Singleton.__single = self
That seems to do the trick.
Singleton's half brother
I completely agree with staale and I leave here a sample of creating a singleton half brother:
class void: pass

a = void()
a.__class__ = Singleton
a will now report as being of the same class as Singleton, even though it doesn't look like one. So singletons built from complicated classes end up depending on us not messing with them too much.
Given that, we can get the same effect with simpler things, like a variable or a module. Still, if we want to use classes for clarity, and because in Python a class is an object, we already have the object (not an instance, but it will do just the same):
class Singleton:
    def __new__(cls): raise AssertionError  # Singletons can't have instances
There we get a nice assertion error if we try to create an instance, and we can store static members on derivations and change them at runtime (I love Python). This object is as good as the others regarding half brothers (you can still create them if you wish), but it will tend to run faster due to its simplicity.
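For example (a hypothetical Settings derivation, just to show the class-level state):

class Settings(Singleton):
    verbose = False

Settings.verbose = True    # mutate class-level state at runtime
print(Settings.verbose)    # True
Settings()                 # AssertionError -- no instances allowed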
In cases where you don't want the metaclass-based solution above, and you don't like the simple function decorator-based approach (e.g. because in that case static methods on the singleton class won't work), this compromise works:
class singleton(object):
    """Singleton decorator."""

    instances = {}

    def __init__(self, cls):
        self.__dict__['cls'] = cls

    def __call__(self):
        if self.cls not in self.instances:
            self.instances[self.cls] = self.cls()
        return self.instances[self.cls]

    def __getattr__(self, attr):
        return getattr(self.__dict__['cls'], attr)

    def __setattr__(self, attr, value):
        return setattr(self.__dict__['cls'], attr, value)
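Usage might look like this (Config and its attributes are hypothetical):

@singleton
class Config(object):
    path = '/tmp'   # class attribute, still reachable through the wrapper

    def __init__(self):
        self.loaded = True

c1 = Config()
c2 = Config()
print(c1 is c2)      # True
print(Config.path)   # '/tmp' -- __getattr__ forwards to the decorated class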
Preface: I'm trying to guard against misuse (mostly by myself) and not malicious use (thus the "consenting adults" principle does not apply).
I'm trying to implement something like this:
class Foo(Base):
    ...

class Bar(Base):
    ...

class FooBarFactory:
    __bar_cache = BarCache()

    @classmethod
    def createFoo(cls):
        return Foo()

    @classmethod
    def createBar(cls, key):
        return cls.__bar_cache.get_or_create(key)
The problem is that I want to restrict Foo and Bar creation to only FooBarFactory's methods. So,
foo = FooBarFactory.createFoo() # OK
foo = Foo() # raise AssertionError('use factory method')
How do I do that? One option that I see is to put Foo and Bar inside the factory class (to ensure that code users know about the factory). But that would produce a bloated class definition. Another option is to do something like this:
class Foo:
    _trade_secret = 'super_secret_foo_message_dont_use'

    def __init__(self, secret):
        assert secret == Foo._trade_secret
        ...

class FooBarFactory:
    ...

    @classmethod
    def createFoo(cls):
        # suppress private field access warning here
        return Foo(Foo._trade_secret)
But that also looks clumsy and verbose.
Any help is greatly appreciated. Thanks!
If your factory can do it, then everyone else can. There is no solution for this in Python, as no one has special privileges.
On the other hand, while you can't force people to code properly, you can make it hard for them to screw up:
class Foo:
    def __new__(*args, **kwargs):
        raise NotImplementedError("Use the factory.")

    @classmethod
    def _new(cls, *args, **kwargs):
        foo = super().__new__(cls)
        foo.__init__(*args, **kwargs)
        return foo

class Factory:
    @staticmethod
    def createFoo(*args, **kwargs):
        return Foo._new(*args, **kwargs)

Factory.createFoo()  # works fine
Foo()                # raises an exception
But if your users want to call Foo._new then nothing will stop them from creating an object without "your permission".
You can use sys._getframe(1) to get the caller's frame, from which you can read the caller's cls local variable and the caller's function name. To make sure someone isn't calling Foo.__new__ from a different class with the same name and the same method name, you can also check that the filename of the caller's frame matches the filename of the current frame:
import sys

class Foo:
    def __new__(cls):
        caller_frame = sys._getframe(1)
        if 'cls' not in caller_frame.f_locals or \
                caller_frame.f_locals["cls"].__name__ != 'FooBarFactory' or \
                caller_frame.f_code.co_name != 'createFoo' or \
                caller_frame.f_code.co_filename != sys._getframe(0).f_code.co_filename:
            raise RuntimeError('Foo must be instantiated via the FooBarFactory.createFoo method.')
        print('Foo OK')
        return super().__new__(cls)

class FooBarFactory:
    @classmethod
    def createFoo(cls):
        return Foo()
so that:
FooBarFactory.createFoo()
outputs:
Foo OK
and:
Foo()
outputs:
RuntimeError: Foo must be instantiated via the FooBarFactory.createFoo method.
Or, since you supposedly control your own file, and the FooBarFactory.createFoo method is supposedly the only caller in that file that instantiates Foo, the filename check alone should be enough:
class Foo:
    def __new__(cls):
        if sys._getframe(1).f_code.co_filename != sys._getframe(0).f_code.co_filename:
            raise RuntimeError('Foo must be instantiated via the FooBarFactory.createFoo method.')
        return super().__new__(cls)
I came up with the following solution in addition to the various options in other answers:
class Base:
    def __new__(cls, *args, **kwargs):
        assert kwargs.get('secret') == Factory._secret, 'use the factory'
        return super(Base, cls).__new__(cls)

class Foo(Base):
    def __init__(self, param, **kwargs):
        self.param = param

class Factory:
    _secret = 'super_secret_dont_copy'

    @classmethod
    def create_foo(cls, param):
        return Foo(param=param, secret=cls._secret)
Now,
foo = Foo(23) # AssertionError
foo = Factory.create_foo(23) # OK
This solution minimizes the extra code needed for additional subclasses of Base (the validation code is encapsulated in the Base class), but it has the drawback of having to add **kwargs to every subclass's __init__.
I have a boilerplatey class that delegates some actions to a reference class. It looks like this:
class MyClass():
    def __init__(self, someClass):
        self.refClass = someClass

    def action1(self):
        self.refClass.action1()

    def action2(self):
        self.refClass.action2()

    def action3(self):
        self.refClass.action3()
This is the refClass:
class RefClass():
    def __init__(self):
        self.myClass = MyClass(self)

    def action1(self):
        pass  # stuff to execute action1

    def action2(self):
        pass  # stuff to execute action2

    def action3(self):
        pass  # stuff to execute action3
I'd like to use Python Metaprogramming to make this more elegant and readable, but I'm not sure how.
I've heard of setattr and getattr, and I think I could do something like
class MyClass():
    def __init__(self, someClass):
        self.refClass = someClass

        for action in ['action1', 'action2', 'action3']:
            def _delegate(self):
                getattr(self.refClass, action)()
And then I know I need to do this from somewhere, I guess:
MyClass.setattr(action, delegate)
I just can't totally grasp this concept. I understand the basics about not repeating code, and generating the methods with a for loop with functional programming, but then I don't know how to call these methods from elsewhere. Heeeelp!
Python already includes support for generalized delegation to a contained class. Just change the definition of MyClass to:
class MyClass:
    def __init__(self, someClass):
        self.refClass = someClass  # Note: You call this someClass, but it's actually some object, not some class, in your example

    def __getattr__(self, name):
        return getattr(self.refClass, name)
When defined, __getattr__ is called on the instance with the name of the accessed attribute any time an attribute is not found on the instance itself. You then delegate to the contained object by calling getattr to look up the attribute on the contained object and return it. This costs a little each time to do the dynamic lookup, so if you want to avoid it, you can lazily cache attributes when they're first requested by __getattr__, so subsequent access is direct:
def __getattr__(self, name):
    attr = getattr(self.refClass, name)
    setattr(self, name, attr)
    return attr
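Usage, with the question's RefClass in scope, might look like:

ref = RefClass()
wrapper = MyClass(ref)
wrapper.action1()  # not found on the wrapper itself, so __getattr__ forwards it to ref.action1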
Personally, for delegating things I usually do something like that:
def delegate(prop_name, meth_name):
    def proxy(self, *args, **kwargs):
        prop = getattr(self, prop_name)
        meth = getattr(prop, meth_name)
        return meth(*args, **kwargs)
    return proxy

class MyClass(object):
    def __init__(self, someClass):
        self.refClass = someClass

    action1 = delegate('refClass', 'action1')
    action2 = delegate('refClass', 'action2')
This will create all the delegate methods you need :)
For some explanation: the delegate function here just creates a "proxy" function that acts as a class method (note the self argument) and passes all arguments given to it on to the referenced object's method via *args and **kwargs.
You can do this with a list too, but I prefer the first version because it's more explicit to me :)
import types

class MyClass(object):
    delegated_methods = ['action1', 'action2']

    def __init__(self, someClass):
        self.refClass = someClass
        for meth_name in self.delegated_methods:
            # bind the proxy to this instance; a plain function stored as an
            # instance attribute would not receive `self` automatically
            setattr(self, meth_name,
                    types.MethodType(delegate('refClass', meth_name), self))
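A quick usage sketch (Worker is a hypothetical stand-in for the referenced object):

class Worker(object):
    def action1(self):
        return 'did action1'

m = MyClass(Worker())
print(m.action1())  # 'did action1' -- the call is proxied through delegate()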
I have a subclass that adds graphics capabilities to a superclass that implements the algorithms. So, in addition to a few extra initialization functions, this subclass will only need to refresh the graphics after the execution of each algorithm-computing function in the superclass.
Classes:
class graph(algorithms):
    ...  # initialization and refresh decorators

    @refreshgraph
    def algorithm1(self, *args, **kwargs):
        return algorithms.algorithm1(self, *args, **kwargs)

    @refreshgraph
    def algorithm2(self, *args, **kwargs):
        return algorithms.algorithm2(self, *args, **kwargs)

    ...  # and so on
Is there a pythonic way to automatically decorate all the non-private methods defined in the superclass, so that if I add a new algorithm there I don't need to mention it explicitly in my subclass? I would also like to be able to explicitly exclude some of the superclass's methods.
The subclass always gets all the methods from the parent class(es) by default. If you wish to emulate the behavior other languages use for privacy (e.g. the private or protected modifiers in C#), you have two options:
1) Python convention (and it's just a convention) is that methods with a single leading underscore in their names are not designed for access from outside the defining class.
2) Names with a double leading underscore are mangled in the bytecode so they aren't visible to other code under their own names. ParentClass.__private is visible inside ParentClass, but can only be accessed from outside ParentClass as ParentClass._ParentClass__private. (Great explanations here). Nothing in Python is truly private ;)
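A tiny demonstration of the mangling:

class ParentClass(object):
    __private = 42                 # stored as _ParentClass__private

    def reveal(self):
        return self.__private      # works inside the defining class

p = ParentClass()
print(p.reveal())                  # 42
print(p._ParentClass__private)     # 42 -- mangling renames, it doesn't hide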
To override an inherited method just define the same name in a derived class. To call the parent class method inside the derived class you can do it as you did in your example, or using super:
def algorithm2(self, *args, **kwargs):
    super(graph, self).algorithm2(*args, **kwargs)  # note: no extra self when using super
    # do other derived stuff here....
    self.refresh()
This is ugly, but I think it does what you want, but without inheritance:
class DoAfter(object):
    def __init__(self, obj, func):
        self.obj = obj
        self.func = func

    def __getattribute__(self, attr, *a, **kw):
        obj = object.__getattribute__(self, 'obj')
        if attr in dir(obj):
            x = getattr(obj, attr)
            if callable(x):
                def b(*a, **kw):
                    retval = x(*a, **kw)
                    self.func()
                    return retval
                return b
            else:
                return x
        else:
            return object.__getattribute__(self, attr)
Use it like this:
>>> class A(object):
...     def __init__(self):
...         self.a = 1
...
...     def boo(self, c):
...         self.a += c
...         return self.a

>>> def do_something():
...     print('a')

>>> a = A()
>>> print(a.boo(1))
2
>>> print(a.boo(2))
4

>>> b = DoAfter(a, do_something)
>>> print(b.boo(1))
a
5
>>> print(b.boo(2))
a
7
A increments a counter each time A.boo is called. DoAfter wraps A, so that any method of the instance a can be called as if it were a member of b. Note that every method is wrapped this way, so do_something() runs whenever any wrapped method is called.
This is barely tested, not recommended, and probably a bad idea. But, I think it does what you asked for.
EDIT: to do this with inheritance:
class graph(algorithms):
    def refreshgraph(self):
        print('refreshgraph')

    def __getattribute__(self, attr):
        if attr in dir(algorithms):
            x = algorithms.__getattribute__(self, attr)
            if callable(x):
                def wrapped(*a, **kw):
                    retval = x(*a, **kw)
                    self.refreshgraph()
                    return retval
                return wrapped
            else:
                return x
        else:
            return object.__getattribute__(self, attr)
I have Python class trees, each made up of an abstract base class and many deriving concrete classes. I want all concrete classes to be accessible through a base-class method, and I do not want to specify anything during child-class creation.
This is what my imagined solution looks like:
class BaseClassA(object):
    # <some magic code around here>

    @classmethod
    def getConcreteClasses(cls):
        pass  # <some magic related code here>

class ConcreteClassA1(BaseClassA):
    pass  # no magic-related code here

class ConcreteClassA2(BaseClassA):
    pass  # no magic-related code here
As much as possible, I'd prefer to write the "magic" once as a sort of design pattern. I want to be able to apply it to different class trees in different scenarios (i.e. add a similar tree with "BaseClassB" and its concrete classes).
Thanks Internet!
You can use metaclasses for that:
class AutoRegister(type):
    def __new__(mcs, name, bases, classdict):
        new_cls = type.__new__(mcs, name, bases, classdict)
        # print(mcs, name, bases, classdict)
        for b in bases:
            if hasattr(b, 'register_subclass'):
                b.register_subclass(new_cls)
        return new_cls

class AbstractClassA(metaclass=AutoRegister):
    _subclasses = []

    @classmethod
    def register_subclass(klass, cls):
        klass._subclasses.append(cls)

    @classmethod
    def get_concrete_classes(klass):
        return klass._subclasses

class ConcreteClassA1(AbstractClassA):
    pass

class ConcreteClassA2(AbstractClassA):
    pass

class ConcreteClassA3(ConcreteClassA2):
    pass

print(AbstractClassA.get_concrete_classes())
I'm personally very wary of this kind of magic. Don't put too much of it in your code.
Here is a simple solution using modern Python's (3.6+) __init_subclass__, defined in PEP 487. It allows you to avoid using a metaclass.
class BaseClassA(object):
    _subclasses = []

    @classmethod
    def get_concrete_classes(cls):
        return list(cls._subclasses)

    def __init_subclass__(cls):
        BaseClassA._subclasses.append(cls)

class ConcreteClassA1(BaseClassA):
    pass  # no magic-related code here

class ConcreteClassA2(BaseClassA):
    pass  # no magic-related code here
print(BaseClassA.get_concrete_classes())
You should know that part of the answer you're looking for is built-in. New-style classes automatically keep a weak reference to all of their child classes which can be accessed with the __subclasses__ method:
@classmethod
def getConcreteClasses(cls):
    return cls.__subclasses__()
This won't return sub-sub-classes. If you need those, you can create a recursive generator to get them all:
@classmethod
def getConcreteClasses(cls):
    for c in cls.__subclasses__():
        yield c
        for c2 in c.getConcreteClasses():
            yield c2
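Assuming this classmethod lives on the question's BaseClassA, it yields every descendant, however deep:

for c in BaseClassA.getConcreteClasses():
    print(c)  # ConcreteClassA1, ConcreteClassA2, and any of their subclasses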
Another way to do this, with a decorator, if your subclasses are either not defining __init__ or are calling their parent's __init__:
def lister(cls):
    cls.classes = list()
    cls._init = cls.__init__

    def init(self, *args, **kwargs):
        cls = self.__class__
        if cls not in cls.classes:
            cls.classes.append(cls)
        cls._init(self, *args, **kwargs)
    cls.__init__ = init

    @classmethod
    def getclasses(cls):
        return cls.classes
    cls.getclasses = getclasses

    return cls

@lister
class A(object): pass

class B(A): pass

class C(A):
    def __init__(self):
        super(C, self).__init__()

b = B()
c = C()
c2 = C()
print('Classes:', c.getclasses())
It will work whether or not the base class defines __init__.
I have a class that knows its existing instances. Sometimes I want the class constructor to return an existing object instead of creating a new one.
class X:
    def __new__(cls, arg):
        i = f(arg)  # f: some lookup function mapping arg to a registry key
        if i:
            return X._registry[i]
        else:
            return object.__new__(cls)
    # more stuff here (such as __init__, _registry, etc.)
Of course, if the first branch is executed, I don't need __init__, but it's invoked anyway. What's a good way to tell __init__ to do nothing?
I can probably just add some attribute to keep track of whether __init__ has run yet, but perhaps there's a better way?
In languages that support private constructors (C#, Dart, Scala, etc), factory methods provide a robust solution to this problem.
In Python, however, class constructors are always accessible, and so a user of your class may easily forget the factory method and call the constructor directly, producing duplicate copies of objects that should be unique.
A fool-proof solution to this problem can be achieved using a metaclass. The example below assumes that the zeroth constructor argument can be used to uniquely identify each instance:
class Unique(type):
    def __call__(cls, *args, **kwargs):
        if args[0] not in cls._cache:
            self = cls.__new__(cls, *args, **kwargs)
            cls.__init__(self, *args, **kwargs)
            cls._cache[args[0]] = self
        return cls._cache[args[0]]

    def __init__(cls, name, bases, attributes):
        super().__init__(name, bases, attributes)
        cls._cache = {}
It can be used as follows:
class Country(metaclass=Unique):
    def __init__(self, name: str, population: float, nationalDish: str):
        self.name = name
        self.population = population
        self.nationalDish = nationalDish

placeA = Country("Netherlands", 16.8e6, "Stamppot")
placeB = Country("Yemen", 24.41e6, "Saltah")
placeC = Country("Netherlands", 11, "Children's tears")

print(placeA is placeB)      # -> False
print(placeA is placeC)      # -> True
print(placeC.nationalDish)   # -> Stamppot
In summary, this approach is useful if you want to produce a set of unique objects at runtime (possibly using data in which entries may be repeated).
Use a factory, i.e.
_x_singleton = None

def XFactory():
    global _x_singleton
    if _x_singleton is None:
        _x_singleton = X()
    return _x_singleton
or use a "create" classmethod in your class that behaves the way you want it to,
class X(object):
    instance = None

    def __init__(self):
        pass  # ...

    @classmethod
    def create(cls):
        if cls.instance is None:
            cls.instance = cls()
        return cls.instance
You might even consider making __init__ raise an exception if the singleton already exists (i.e. if cls.instance is not None).
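A minimal sketch of that guard, combined with the create classmethod above:

class X(object):
    instance = None

    def __init__(self):
        # block direct construction once the singleton exists
        if X.instance is not None:
            raise RuntimeError('X already exists; use X.create()')

    @classmethod
    def create(cls):
        if cls.instance is None:
            cls.instance = cls()
        return cls.instance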