I'm building my own lightweight orm. I'd like to keep instantiated objects in a class variable (perhaps a dictionary). When I request an object (through a class method) like get(id), I'd like to return the object from the instantiated list, or create one if it does not exist.
Is there a way to prevent the instantiation of an object (if its id already exists in the cls list)?
There are two straightforward ways of doing it - and many other ways, as well. One of them, as you suggest, is to write the __new__ method for your class, which can return an already existing object or create a new instance.
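For completeness, a minimal sketch of the __new__ approach might look like this (the _instances cache and the id argument here are illustrative, not from the question):
class MyClass:
    _instances = {}  # illustrative per-class cache, keyed by id

    def __new__(cls, id, *args, **kw):
        # reuse the cached object if one exists for this id, otherwise create and remember it
        if id not in cls._instances:
            cls._instances[id] = super().__new__(cls)
        return cls._instances[id]
Note that __init__ still runs on the returned object every time the class is called, so you may want to guard against re-initialization.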
Another way is to use a factory function for your objects - and call this factory function instead of the class - more or less like this:
all_objects = {}

class _MyClass(object):
    pass

def MyClass(*args, **kw):
    if "my_class_object" not in all_objects:
        all_objects["my_class_object"] = _MyClass(*args, **kw)
    return all_objects["my_class_object"]
Just perform an explicit check; it is the cleanest method, I believe:
class OrmContainer(object):
    objects = {}

    @classmethod
    def get(cls, id):
        if id not in cls.objects:
            cls.objects[id] = SomeOtherClass(id)
        return cls.objects[id]
I have a class
class A:
    def sample_method():
        ...
I would like to decorate class A's sample_method() and override the contents of sample_method():
class DecoratedA(A):
    def sample_method():
        ...
The setup above resembles inheritance, but I need to keep the preexisting instance of class A when the decorated function is used.
a # preexisting instance of class A
decorated_a = DecoratedA(a)
decorated_a.functionInClassA() #functions in Class A called as usual with preexisting instance
decorated_a.sample_method() # should call the overridden sample_method() defined in DecoratedA
What is the proper way to go about this?
There isn't a straightforward way to do what you're asking. Generally, after an instance has been created, it's too late to mess with the methods its class defines.
As far as I see it, you have two options: either you create a wrapper or proxy object for your pre-existing instance, or you modify the instance to change its behavior.
A proxy defers most behavior to the object itself, while only adding (or overriding) some limited behavior of its own:
class Proxy:
    def __init__(self, obj):
        self.obj = obj

    def overridden_method(self):  # add your own limited behavior for a few things
        do_stuff()

    def __getattr__(self, name):  # and hand everything else off to the other object
        return getattr(self.obj, name)
__getattr__ isn't perfect here: it only works for regular methods, not for special __dunder__ methods that are often looked up directly on the class itself. If you want your proxy to match all possible behavior, you probably need to add things like __add__ and __getitem__, but that may not be necessary in your specific situation (it depends on what A does).
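For example, if A happens to support indexing or addition, the proxy needs explicit pass-throughs for those; a sketch of such an extended proxy (which operations you actually need depends entirely on A):
class Proxy:
    def __init__(self, obj):
        self.obj = obj

    def __getattr__(self, name):
        return getattr(self.obj, name)

    # dunder lookups bypass __getattr__, so forward the ones the wrapped object supports
    def __getitem__(self, key):
        return self.obj[key]

    def __add__(self, other):
        return self.obj + other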
As for changing the behavior of the existing object, one approach is to write your subclass, and then change the existing object's class to be the subclass. This is a little sketchy, since you won't have ever initialized the object as the new class, but it might work if you're only modifying method behavior.
class ModifiedA(A):
    def overridden_method(self):  # do the override in a normal subclass
        do_stuff()

def modify_obj(obj):  # then change an existing object's type in place!
    obj.__class__ = ModifiedA  # this is not terribly safe, but it can work
You could also consider adding an instance attribute that shadows the method you want to override, rather than modifying __class__. Writing the function can be a little tricky, since it won't get bound to the object automatically when called (that only happens for functions that are attributes of a class, not attributes of an instance), but you can do the binding yourself (with partial or a lambda) if you need access to self.
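A rough sketch of that shadowing approach, assuming a hypothetical replacement function new_sample_method and using functools.partial to do the binding by hand:
import functools

def new_sample_method(self):
    do_stuff()  # hypothetical replacement behavior

a = A()
# shadow the class-level method on this one instance only; partial supplies `a`
# by hand, since functions stored on an instance are not bound automatically
a.sample_method = functools.partial(new_sample_method, a)
a.sample_method()  # runs new_sample_method with a as self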
First, why not just define it the way you want it from the beginning, instead of decorating it?
Second, why not decorate the method itself?
To answer the question:
You can reassign it
class A:
    def sample_method(): ...

A.sample_method = DecoratedA.sample_method
but that affects every instance.
Another solution is to reassign the method for just one object.
import functools

a.sample_method = functools.partial(DecoratedA.sample_method, a)
Another solution is to (temporarily) change the type of an existing object.
a = A()
a.__class__ = DecoratedA
a.sample_method()
a.__class__ = A
Is there a way in Python to store an instantiated class as a class 'template' (i.e. promote an object to a class) in order to create new objects of the same type with the same field values, without relying on the data that was used to create the original object or on copy.deepcopy?
Like, for example I have the dictionary:
valid_date = {"date":"30 february"} # dict could have multiple items
and I have the class:
class AwesomeDate:
    def __init__(self, dates_dict):
        for key, val in dates_dict.items():
            setattr(self, key, val)
I create the instance of the class like:
totally_valid_date = AwesomeDate(valid_date)
print(totally_valid_date.date) # output: 30 february
and now I want to use it to create new instances of the AwesomeDate class using the totally_valid_date instance as a template, i.e. like:
how_make_it_work = totally_valid_date()
print(how_make_it_work.date) # should print: 30 february
Is there a way to do so or not? I need a generic solution, not a solution for this specific example.
I don't really see the benefit of having a class act both as a template to instances, and as the instance itself, both conceptually and coding-wise. In my opinion, you're better off using two different classes - one for the template, one for the objects it is able to create.
You can think about awesome_date as a template class that stores the valid_date attributes upon initialization. Once called, the template returns an instance of a different class that has the expected attributes.
Here's a simple implementation (names have been changed to generalize the idea):
class Thing:
    pass

class Template:
    def __init__(self, template_attrs):
        self.template_attrs = template_attrs

    def __call__(self):
        instance = Thing()
        for key, val in self.template_attrs.items():
            setattr(instance, key, val)
        return instance
attrs = {'date': '30 february'}
template = Template(template_attrs=attrs)
# Gets instance of Thing
print(template()) # output: <__main__.Thing object at 0x7ffa656f8668>
# Gets another instance of Thing and accesses the date attribute
print(template().date) # output: 30 february
Yes, there are ways to do it - there could even be some tweaking of inheriting from type and meddling with __call__ to make all instances automatically become derived classes, but I don't think that would be very sane. Python's own enum.Enum does something along these lines, because it has some use for the enum values - but the price is that it became hard to understand beyond basic usage, even for seasoned Pythonistas.
However, having a custom __init_subclass__ method that injects some code to run prior to __init__ on the derived class, plus a method that returns a new class bound to the data the new classes should have, can suffice:
import copy
from functools import wraps

def wrap_init(init):
    @wraps(init)
    def wrapper(self, *args, **kwargs):
        if not getattr(self, "_initialized", False):
            self.__dict__.update(self._template_data or {})
            self._initialized = True
        return init(self, *args, **kwargs)
    wrapper._template_wrapper = True
    return wrapper
class TemplateBase:
    _template_data = None

    def __init_subclass__(cls, *args, **kwargs):
        super().__init_subclass__(*args, **kwargs)
        if getattr(cls.__init__, "_template_wrapper", False):
            return
        init = cls.__init__
        cls.__init__ = wrap_init(init)

    def as_class(self):
        cls = self.__class__
        new_cls = type(cls.__name__ + "_templated", (cls,), {})
        new_cls._template_data = copy.copy(self.__dict__)
        return new_cls
And using it:
class AwesomeDate(TemplateBase):
    def __init__(self, dates_dict):
        for key, val in dates_dict.items():
            setattr(self, key, val)
On the REPL we have:
In [34]: x = AwesomeDate({"x":1, "y":2})
In [35]: Y = x.as_class()
In [36]: y = Y({})
In [37]: y.x
Out[37]: 1
Actually, __init_subclass__ itself could be suppressed, and decorating __init__ could be done in one shot in the as_class method. This code takes some care so that mixin classes can be used and everything will still work.
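A sketch of that alternative, reusing wrap_init from above but applying it only inside as_class, so no __init_subclass__ hook is needed:
import copy

class TemplateBase:
    _template_data = None

    def as_class(self):
        cls = self.__class__
        new_cls = type(cls.__name__ + "_templated", (cls,), {})
        new_cls._template_data = copy.copy(self.__dict__)
        new_cls.__init__ = wrap_init(cls.__init__)  # decorate __init__ only on the derived class
        return new_cls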
It seems like you are going for something along the lines of the prototype design pattern.
What is the prototype design pattern?
From Wikipedia: Prototype pattern
The prototype pattern is a creational design pattern in software development. It is used when the type of objects to create is determined by a prototypical instance, which is cloned to produce new objects. This pattern is used to avoid subclasses of an object creator in the client application, like the factory method pattern does and to avoid the inherent cost of creating a new object in the standard way (e.g., using the 'new' keyword) when it is prohibitively expensive for a given application.
From Refactoring.guru: Prototype
Prototype is a creational design pattern that lets you copy existing objects without making your code dependent on their classes. The Prototype pattern delegates the cloning process to the actual objects that are being cloned. The pattern declares a common interface for all objects that support cloning. This interface lets you clone an object without coupling your code to the class of that object. Usually, such an interface contains just a single clone method.
The implementation of the clone method is very similar in all classes. The method creates an object of the current class and carries over all of the field values of the old object into the new one. You can even copy private fields because most programming languages let objects access private fields of other objects that belong to the same class.
An object that supports cloning is called a prototype. When your objects have dozens of fields and hundreds of possible configurations, cloning them might serve as an alternative to subclassing.
Here’s how it works: you create a set of objects, configured in various ways. When you need an object like the one you’ve configured, you just clone a prototype instead of constructing a new object from scratch.
Implementing this for your problem, along with your other ideas
From your explanation, it seems like you want to:
Provide a variable containing a dictionary, which will be passed to the __init__ of some class Foo
Instantiate class Foo and pass the variable containing the dictionary as an argument.
Implement __call__ onto class Foo, allowing us to use the function call syntax on an object of class Foo.
The implementation of __call__ will COPY/CLONE the “template” object. We can then do whatever we want with this copied/cloned instance.
The Code (edited)
import copy

class Foo:
    def __init__(self, *, template_attrs):
        if not isinstance(template_attrs, dict):
            raise TypeError("You must pass a dict to instantiate this class.")
        self.template_attrs = template_attrs

    def __call__(self):
        return copy.copy(self)

    def __repr__(self):
        return f"{self.template_attrs}"

    def __setitem__(self, key, value):
        self.template_attrs[key] = value

    def __getitem__(self, key):
        if key not in self.template_attrs:
            raise KeyError(f"Key {key} does not exist in '{self.template_attrs=}'.")
        return self.template_attrs[key]
err = Foo(template_attrs=1) # Output: TypeError: You must pass a dict to instantiate this class.
# remove err's assignment to have code under it run
base = Foo(template_attrs={1: 2})
print(f"{base=}") # Output: base={1: 2}
base_copy = base()
base_copy["hello"] = "bye"
print(f"{base_copy=}") # Output: base_copy={1: 2, 'hello': 'bye'}
print(f"{base_copy[1]=}") # Output: base_copy[1]=2
print(f"{base_copy[10]=}") # Output: KeyError: "Key 10 does not exist in 'self.template_attrs={1: 2, 'hello': 'bye'}'."
I also added support for subscripting and item assignment through __getitem__ and __setitem__ respectively. I hope that this helped a bit with your problem! Feel free to comment on this if I missed what you were asking.
Reasons for edits (May 16th, 2022 at 8:49 PM CST | Approx. 9 hours after original answer)
Fix code based on suggestions by comment from user jsbueno
Handle, in __getitem__, if an instance of class Foo is subscripted with a key that doesn't exist in the dict.
Handle, in __init__, if the type of template_attrs isn't dict (did this based on the fact that you used a dictionary in the body of your question)
I have a class sysprops in which I'd like to have a number of constants. However, I'd like to pull the values for those constants from the database, so I'd like some sort of hook any time one of these class constants is accessed (something like the __getattribute__ method for instance attributes).
class sysprops(object):
    SOME_CONSTANT = 'SOME_VALUE'

sysprops.SOME_CONSTANT # this statement would not return 'SOME_VALUE' but instead a dynamic value pulled from the database
Although I think it is a very bad idea to do this, it is possible:
class GetAttributeMetaClass(type):
    def __getattribute__(self, key):
        print('Getting attribute', key)
        return super().__getattribute__(key)  # fall back to the normal lookup

class sysprops(metaclass=GetAttributeMetaClass):
    pass
While the other two answers have a valid method, I like to take the route of 'least magic'.
You can do something similar to the metaclass approach without actually using them. Simply by using a decorator.
def instancer(cls):
    return cls()

@instancer
class SysProps(object):
    def __getattribute__(self, key):
        return key  # dummy
This will create an instance of SysProps and then assign it back to the SysProps name. Effectively shadowing the actual class definition and allowing a constant instance.
Since decorators are more common in Python I find this way easier to grasp for other people that have to read your code.
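For example, with the dummy __getattribute__ above, any attribute access on the (now shadowed) SysProps name simply echoes the attribute name; a real implementation would query the database at that point:
print(SysProps.SOME_CONSTANT)  # prints SOME_CONSTANT with the dummy above; a real version would hit the database here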
sysprops.SOME_CONSTANT can be the return value of a function if SOME_CONSTANT were a property defined on type(sysprops).
In other words, what you are talking about is commonly done if sysprops were an instance instead of a class.
But here is the kicker -- classes are instances of metaclasses. So everything you know about controlling the behavior of instances through the use of classes applies equally well to controlling the behavior of classes through the use of metaclasses.
Usually the metaclass is type, but you are free to define other metaclasses by subclassing type. If you place a property SOME_CONSTANT in the metaclass, then the instance of that metaclass, e.g. sysprops will have the desired behavior when Python evaluates sysprops.SOME_CONSTANT.
class MetaSysProps(type):
    @property
    def SOME_CONSTANT(cls):
        return 'SOME_VALUE'

class SysProps(metaclass=MetaSysProps):
    pass
print(SysProps.SOME_CONSTANT)
yields
SOME_VALUE
I have a module (db.py) which loads data from different database types (sqlite, mysql, etc.). The module contains a class db_loader and subclasses (sqlite_loader, mysql_loader) which inherit from it.
The type of database being used is stored in a separate params file.
How does the user get the right object back?
i.e how do I do:
loader = db.loader()
Do I use a method called loader in the db.py module or is there a more elegant way whereby a class can pick its own subclass based on a parameter? Is there a standard way to do this kind of thing?
Sounds like you want the Factory Pattern. You define a factory method (either in your module, or perhaps in a common parent class for all the objects it can produce) that you pass the parameter to, and it will return an instance of the correct class. In Python the problem is a bit simpler than some of the details in the Wikipedia article suggest, since your types are dynamic.
class Animal(object):
    @staticmethod
    def get_animal_which_makes_noise(noise):
        if noise == 'meow':
            return Cat()
        elif noise == 'woof':
            return Dog()

class Cat(Animal):
    ...

class Dog(Animal):
    ...
is there a more elegant way whereby a class can pick its own subclass based on a parameter?
You can do this by overriding your base class's __new__ method. This will allow you to simply go loader = db_loader(db_type) and loader will magically be the correct subclass for the database type. This solution is mildly more complicated than the other answers, but IMHO it is surely the most elegant.
In its simplest form:
class Parent:
    def __new__(cls, feature):
        subclass_map = {subclass.feature: subclass for subclass in cls.__subclasses__()}
        subclass = subclass_map[feature]
        instance = super(Parent, subclass).__new__(subclass)
        return instance

class Child1(Parent):
    feature = 1

class Child2(Parent):
    feature = 2

type(Parent(1))  # <class '__main__.Child1'>
type(Parent(2))  # <class '__main__.Child2'>
(Note that as long as __new__ returns an instance of cls, the instance's __init__ method will automatically be called for you.)
This simple version has issues though and would need to be expanded upon and tailored to fit your desired behaviour. Most notably, this is something you'd probably want to address:
Parent(3) # KeyError
Child1(1) # KeyError
So I'd recommend either adding cls to subclass_map or using it as the default, like so: subclass_map.get(feature, cls). If your base class isn't meant to be instantiated -- maybe it even has abstract methods? -- then I'd recommend giving Parent the metaclass abc.ABCMeta.
If you have grandchild classes too, then I'd recommend putting the gathering of subclasses into a recursive class method that follows each lineage to the end, adding all descendants.
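A sketch combining those suggestions; the recursive helper name _all_subclasses is mine, and falling back to cls keeps calls like Parent(3) or Child1(1) from raising KeyError:
class Parent:
    @classmethod
    def _all_subclasses(cls):
        # recursively collect every descendant, not just direct children
        subclasses = []
        for sub in cls.__subclasses__():
            subclasses.append(sub)
            subclasses.extend(sub._all_subclasses())
        return subclasses

    def __new__(cls, feature):
        subclass_map = {sub.feature: sub for sub in cls._all_subclasses()
                        if hasattr(sub, "feature")}
        subclass = subclass_map.get(feature, cls)  # fall back to cls itself
        return super().__new__(subclass)

class Child1(Parent):
    feature = 1

class Grandchild(Child1):
    feature = 3

type(Parent(3))   # <class '__main__.Grandchild'>
type(Parent(99))  # no KeyError; falls back to <class '__main__.Parent'>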
This solution is more beautiful than the factory method pattern IMHO. And unlike some of the other answers, it's self-maintaining because the list of subclasses is created dynamically, instead of being kept in a hardcoded mapping. And this will only instantiate subclasses, unlike one of the other answers, which would instantiate anything in the global namespace matching the given parameter.
I'd store the name of the subclass in the params file, and have a factory method that would instantiate the class given its name:
class loader(object):
    @staticmethod
    def get_loader(name):
        return globals()[name]()

class sqlite_loader(loader): pass
class mysql_loader(loader): pass

print(type(loader.get_loader('sqlite_loader')))
print(type(loader.get_loader('mysql_loader')))
Store the classes in a dict, instantiate the correct one based on your param:
db_loaders = dict(sqlite=sqlite_loader, mysql=mysql_loader)
loader = db_loaders.get(db_type, default_loader)()
where db_type is the parameter you are switching on, and sqlite_loader and mysql_loader are the "loader" classes.
This is going to be difficult to explain, but what I'm trying to do is create a Base object to base other objects on. The Base class handles shared tasks so that subclasses don't need to keep implementing them. However, I also need a static/class method which creates instances of the classes. So for example, this is my Base class:
class Base(object):
    def __init__(self, service, reference, vo=None):
        self.service = service
        self.reference = reference
        self.id = reference.getId()
        self.name = reference.getName()
        # do a database lookup here to get more information

    @staticmethod
    def get_objects(service, references, as_dict=False):
        """
        More efficient way to get multiple objects at once. Requires the service
        instance and a list or tuple of references.
        """
        vo_list = database.get_objects_from_references(references)
        items = list()
        for vo in vo_list:
            items.append(Base(service, vo.getRef(), vo))
        return items
The get_objects() method will take a list of ID numbers of entries stored in a database, then get all those objects and make objects out of them in one shot instead of hitting the database for each ID. The problem I'm having is that I have to use Base() in that method to instantiate the class. But this instantiates the Base class, not the subclass:
class Something(Base):
    def __init__(self, service, reference, vo=None):
        Base.__init__(self, service, reference, vo)
        do_extra_stuff()
My problem is that I don't know what happens if I do this:
Something.get_objects(service, references)
Will it just run Base's __init__() method, or will it run the subclass's __init__() method (and call do_extra_stuff())?
You want a classmethod instead; it will receive the class object as its first parameter, so you can build an instance of that specific class:
@classmethod
def get_objects(cls, service, references, as_dict=False):
    """
    More efficient way to get multiple objects at once. Requires the service
    instance and a list or tuple of references.
    """
    vo_list = database.get_objects_from_references(references)
    items = list()
    for vo in vo_list:
        items.append(cls(service, vo.getRef(), vo))
    return items
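With that change, calling the factory through the subclass builds the objects via cls(...), so each returned object is a Something and its __init__ (including do_extra_stuff()) has run:
items = Something.get_objects(service, references)
# every element is a Something, constructed through Something.__init__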