Python: create instance of derived class

I want to have an abstract class Task and some derived classes like TaskA, TaskB, ...
I need a static method in Task that fetches all the tasks and returns a list of them. The problem is that I have to fetch every task differently. I want Task to be universal, so that when I create a new class, for example TaskC, it works without changing class Task. Which design pattern should I use?
Let's say every derived Task will have a decorator with its unique id; I am looking for a function that finds a class by id and creates an instance of it. How can I do this in Python?

There are a couple of ways you could achieve this.
The first and simplest is using the __new__ method as a factory to decide which subclass should be returned.
class Base:
    UUID = "0"

    def __new__(cls, *args, **kwargs):
        # placeholder conditions: decide which subclass to build from the args
        if args == "some condition":
            return A(*args, **kwargs)
        elif args == "another condition":
            return B(*args, **kwargs)
        return super().__new__(cls)


class A(Base):
    UUID = "1"


class B(Base):
    UUID = "2"


instance = Base("some", "args", "for", "the", condition=True)
In this example, if you wanted to make sure that the class is selected by UUID, you could replace the if condition with something like:
if A.UUID == "an argument you passed":
    return A
But that is not really useful: if you already know the specific UUID, you might as well not bother going through the interface.
Since I don't know what you want the decorator for, I can't think of a way to integrate it.
EDIT TO ADDRESS THE NOTE:
You don't need to update it every time if you write your expressions smartly.
Let's say that the defining factor comes from a config file that says "use class B":
for sub_class in cls.__subclasses__():
    if sub_class.UUID == config.uuid:
        return sub_class(*args, **kwargs)  # make an instance and return it
The problem with that is that a UUID is not meaningful to people. It would be easier to understand if we used a config.name instead, replacing every place we have uuid in the example.
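A minimal, self-contained sketch of that name-based lookup might look like the following; the config object here is just a stand-in for whatever you actually load from your config file:
from types import SimpleNamespace

# stand-in for a value read from a config file
config = SimpleNamespace(name="B")


class Base:
    def __new__(cls, *args, **kwargs):
        # pick the subclass whose class name matches the configured name
        for sub_class in cls.__subclasses__():
            if sub_class.__name__ == config.name:
                return super().__new__(sub_class)
        return super().__new__(cls)


class A(Base):
    pass


class B(Base):
    pass


obj = Base()
print(type(obj).__name__)  # B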

I was fighting with this for a long time, and this is exactly what I wanted:
from typing import Dict


def class_id(id: int):
    def func(cls):
        cls.class_id = lambda: id
        return cls
    return func


def find_subclass_by_id(cls: type, id: int) -> type:
    for t in cls.__subclasses__():
        if getattr(t, "class_id")() == id:
            return t


def get_class_id(obj) -> int:
    return getattr(type(obj), "class_id")()


class Task():
    def load(self, dict: Dict) -> None:
        pass

    @staticmethod
    def from_dict(dict: Dict) -> 'Task':
        task_type = int(dict['task_type'])
        t = find_subclass_by_id(Task, task_type)
        obj: Task = t()
        obj.load(dict)
        return obj

    @staticmethod
    def fetch(filter: Dict):
        # list_of_dicts is a placeholder for whatever the data source returns
        return [Task.from_dict(doc) for doc in list_of_dicts]


@class_id(1)
class TaskA(Task):
    def load(self, dict: Dict) -> None:
        ...

...
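For illustration, here is a small hypothetical usage of that machinery; TaskB, its id of 2, and the sample document are made up for this sketch:
@class_id(2)
class TaskB(Task):
    def load(self, dict: Dict) -> None:
        self.payload = dict.get('payload')


doc = {'task_type': '2', 'payload': 'hello'}
task = Task.from_dict(doc)       # looks up the subclass registered with id 2
print(type(task).__name__)       # TaskB
print(get_class_id(task))        # 2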

Related

How would I 'listen' to/decorate a setter from an imported class

I'm not sure whether this is a great approach to be using, but I'm not hugely experienced with Python so please accept my apologies. I've tried to do some research on this but other related questions have been given alternative problem-specific solutions - none of which apply to my specific case.
I have a class that handles the training/querying of my specific machine learning model. This algorithm is running on a remote sensor, various values are fed into the object which returns None if the algorithm isn't trained. Once trained, it returns either True or False depending on the classification assigned to new inputs. Occasionally, the class updates a couple of threshold parameters and I need to know when this occurs.
I am using sockets to pass messages from the remote sensor to my main server. I didn't want to complicate the ML algorithm class by filling it up with message passing code and so instead I've been handling this in a Main class that imports the "algorithm" class. I want the Main class to be able to determine when the threshold parameters are updated and report this back to the server.
from queue import Queue


class MyAlgorithmClass:
    def feed_value(self, sensor_value):
        ...


class Main:
    def __init__(self):
        self._algorithm_data = MyAlgorithmClass()
        self._sensor_data_queue = Queue()

    def process_data(self):
        while True:
            sensor_value = self._sensor_data_queue.get()
            result, value = self._algorithm_data.feed_value(sensor_value)
            if result is None:
                # value represents % training complete
                self._socket.emit('training', value)
            elif result is True:
                # value represents % chance that input is categoryA
                self._socket.emit('categoryA', value)
            elif result is False:
                ...
My initial idea was to add a property to MyAlgorithmClass with a setter. I could then decorate this in my Main class so that every time the setter is called, I can use the value... for example:
class MyAlgorithmClass:
    @property
    def param1(self):
        return self._param1

    @param1.setter
    def param1(self, value):
        self._param1 = value


class Main:
    def __init__(self):
        self._algorithm_data = MyAlgorithmClass()
        self._sensor_data_queue = Queue()

    def watch_param1(func):
        def inner(*args):
            self._socket.emit('param1_updated', *args)
            func(*args)
My problem now is: how do I decorate the self._algorithm_data.param1 setter with watch_param1? If I simply set self._algorithm_data.param1 = watch_param1, then I just end up setting self._algorithm_data._param1 equal to my function, which isn't what I want to do.
I could use getter/setter methods instead of a property, but this isn't very pythonic and as multiple people are modifying this code, I don't want the methods to be replaced/changed for properties by somebody else later on.
What is the best approach here? This is a small example but I will have slightly more complex examples of this later on and I don't want something that will cause overcomplication of the algorithm class. Obviously, another option is the Observer pattern but I'm not sure how appropriate it is here where I only have a single variable to monitor in some cases.
I'm really struggling to get a good solution put together so any advice would be much appreciated.
Thanks in advance,
Tom
Use descriptors. They let you customize attribute lookup, storage, and deletion in Python.
A simplified toy version of your code with descriptors looks something like:
class WatchedParam:
    def __init__(self, name):
        self.name = name

    def __get__(self, instance, insttype=None):
        print(f"{self.name} : value accessed")
        return getattr(instance, '_' + self.name)

    def __set__(self, instance, new_val):
        print(f"{self.name} : value set")
        setattr(instance, '_' + self.name, new_val)


class MyAlgorithmClass:
    param1 = WatchedParam("param1")
    param2 = WatchedParam("param2")

    def __init__(self, param1, param2, param3):
        self.param1 = param1
        self.param2 = param2
        self.param3 = param3


class Main:
    def __init__(self):
        self._data = MyAlgorithmClass(10, 20, 50)


m = Main()
m._data.param1        # calls WatchedParam.__get__
m._data.param2 = 100  # calls WatchedParam.__set__
The WatchedParam class is a descriptor and can be used in MyAlgorithmClass to specify the parameters that need to be monitored.
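If you need Main to be told about the change rather than just printing, one possible (hypothetical) extension of this idea is to give the descriptor an optional callback that fires on every assignment; the report function below is only a stand-in for the socket emit:
class WatchedParam:
    def __init__(self, name, on_set=None):
        self.name = name
        self.on_set = on_set  # optional callback fired whenever the value is set

    def __get__(self, instance, insttype=None):
        return getattr(instance, '_' + self.name)

    def __set__(self, instance, new_val):
        setattr(instance, '_' + self.name, new_val)
        if self.on_set is not None:
            self.on_set(self.name, new_val)


def report(name, value):
    # stand-in for something like self._socket.emit('param1_updated', value)
    print(f"{name} updated to {value}")


class MyAlgorithmClass:
    param1 = WatchedParam("param1", on_set=report)

    def __init__(self, param1):
        self.param1 = param1   # fires the callback once at construction


algo = MyAlgorithmClass(10)    # prints: param1 updated to 10
algo.param1 = 42               # prints: param1 updated to 42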
The solution I went for is as follows, using a 'Proxy' subclass which overrides the properties. Eventually, once I have a better understanding of the watched parameters, I won't need to watch them anymore. At this point I will be able to swap out the Proxy for the base class and continue using the code as normal.
class MyAlgorithmClassProxy(MyAlgorithmClass):
    @property
    def watch_param1(self):
        return MyAlgorithmClass.watch_param1.fget(self)

    @watch_param1.setter
    def watch_param1(self, value):
        self._socket.emit('param1_updated', value)
        MyAlgorithmClass.watch_param1.fset(self, value)

Subclassing method decorators in python

I am having trouble thinking of a way, consistent with good Python and the OOP principles I've been taught, to create a family of related method decorators in Python.
The mutually inconsistent goals seem to be that I want to be able to access both the decorator's attributes AND the attributes of the instance on which the decorated method is bound. Here's what I mean:
from functools import wraps


class AbstractDecorator(object):
    """
    This seems like the more natural way, but won't work
    because the instance to which the wrapped function
    is attached will never be in scope.
    """
    def __new__(cls, f, *args, **kwargs):
        return wraps(f)(object.__new__(cls, *args, **kwargs))

    def __init__(decorator_self, f):
        decorator_self.f = f
        decorator_self.punctuation = "..."

    def __call__(decorator_self, *args, **kwargs):
        decorator_self.very_important_prep()
        return decorator_self.f(decorator_self, *args, **kwargs)


class SillyDecorator(AbstractDecorator):
    def very_important_prep(decorator_self):
        print "My apartment was infested with koalas%s" % (decorator_self.punctuation)


class UsefulObject(object):
    def __init__(useful_object_self, noun):
        useful_object_self.noun = noun

    @SillyDecorator
    def red(useful_object_self):
        print "red %s" % (useful_object_self.noun)


if __name__ == "__main__":
    u = UsefulObject("balloons")
    u.red()
which of course produces
My apartment was infested with koalas...
AttributeError: 'SillyDecorator' object has no attribute 'noun'
Note that of course there is always a way to get this to work. A factory with enough arguments, for example, will let me attach methods to some created instance of SillyDecorator, but I was kind of wondering whether there is a reasonable way to do this with inheritance.
@miku got the key idea of using the descriptor protocol. Here is a refinement that keeps the decorator object separate from the "useful object" -- it doesn't store the decorator info on the underlying object.
import functools
from functools import wraps


class AbstractDecorator(object):
    """
    This seems like the more natural way, but won't work
    because the instance to which the wrapped function
    is attached will never be in scope.
    """
    def __new__(cls, f, *args, **kwargs):
        return wraps(f)(object.__new__(cls, *args, **kwargs))

    def __init__(decorator_self, f):
        decorator_self.f = f
        decorator_self.punctuation = "..."

    def __call__(decorator_self, obj_self, *args, **kwargs):
        decorator_self.very_important_prep()
        return decorator_self.f(obj_self, *args, **kwargs)

    def __get__(decorator_self, obj_self, objtype):
        return functools.partial(decorator_self.__call__, obj_self)


class SillyDecorator(AbstractDecorator):
    def very_important_prep(decorator_self):
        print "My apartment was infested with koalas%s" % (decorator_self.punctuation)


class UsefulObject(object):
    def __init__(useful_object_self, noun):
        useful_object_self.noun = noun

    @SillyDecorator
    def red(useful_object_self):
        print "red %s" % (useful_object_self.noun)
>>> u = UsefulObject("balloons")
... u.red()
My apartment was infested with koalas...
red balloons
The descriptor protocol is the key here, since it is the thing that gives you access to both the decorated method and the object on which it is bound. Inside __get__, you can extract the useful object identity (obj_self) and pass it on to the __call__ method.
Note that it's important to use functools.partial (or some such mechanism) rather than simply storing obj_self as an attribute of decorator_self. Since the decorated method is on the class, only one instance of SillyDecorator exists. You can't use this SillyDecorator instance to store useful-object-instance-specific information --- that would lead to strange errors if you created multiple UsefulObjects and accessed their decorated methods without immediately calling them.
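To make that pitfall concrete, here is a small hypothetical sketch (Python 3 syntax) of the broken variant, where __get__ stashes the instance on the shared decorator object instead of binding it with functools.partial:
class BrokenDecorator:
    def __init__(self, f):
        self.f = f

    def __get__(self, obj_self, objtype=None):
        self.obj_self = obj_self      # shared state: overwritten by every lookup
        return self

    def __call__(self, *args, **kwargs):
        return self.f(self.obj_self, *args, **kwargs)


class UsefulObject:
    def __init__(self, noun):
        self.noun = noun

    @BrokenDecorator
    def red(self):
        return "red " + self.noun


a, b = UsefulObject("balloons"), UsefulObject("wagons")
red_a, red_b = a.red, b.red   # the second lookup clobbers the stored instance
print(red_a())                # "red wagons" -- not what you wanted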
It's worth pointing out, though, that there may be an easier way. In your example, you're only storing a small amount of information in the decorator, and you don't need to change it later. If that's the case, it might be simpler to just use a decorator-maker function: a function that takes an argument (or arguments) and returns a decorator, whose behavior can then depend on those arguments. Here's an example:
def decoMaker(msg):
    def deco(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            print msg
            return func(*args, **kwargs)
        return wrapper
    return deco


class UsefulObject(object):
    def __init__(useful_object_self, noun):
        useful_object_self.noun = noun

    @decoMaker('koalas...')
    def red(useful_object_self):
        print "red %s" % (useful_object_self.noun)
>>> u = UsefulObject("balloons")
... u.red()
koalas...
red balloons
You can use the decoMaker ahead of time to make a decorator to reuse later, if you don't want to retype the message every time you make the decorator:
sillyDecorator = decoMaker("Some really long message about koalas that you don't want to type over and over")


class UsefulObject(object):
    def __init__(useful_object_self, noun):
        useful_object_self.noun = noun

    @sillyDecorator
    def red(useful_object_self):
        print "red %s" % (useful_object_self.noun)
>>> u = UsefulObject("balloons")
... u.red()
Some really long message about koalas that you don't want to type over and over
red balloons
You can see that this is much less verbose than writing a whole class inheritance tree for different kinds of decorators. Unless you're writing super-complicated decorators that store all sorts of internal state (which is likely to get confusing anyway), this decorator-maker approach might be an easier way to go.
Adapted from http://metapython.blogspot.de/2010/11/python-instance-methods-how-are-they.html. Note that this variant sets attributes on the target instance, hence, without checks, it is possible to overwrite target instance attributes. The code below does not contain any checks for this case.
Also note that this example sets the punctuation attribute explicitly; a more general class could auto-discover its attributes.
from types import MethodType


class AbstractDecorator(object):
    """Designed to work as function or method decorator"""
    def __init__(self, function):
        self.func = function
        self.punctuation = '...'

    def __call__(self, *args, **kw):
        self.setup()
        return self.func(*args, **kw)

    def __get__(self, instance, owner):
        # TODO: protect against 'overwrites'
        setattr(instance, 'punctuation', self.punctuation)
        return MethodType(self, instance, owner)


class SillyDecorator(AbstractDecorator):
    def setup(self):
        print('[setup] silly init %s' % self.punctuation)


class UsefulObject(object):
    def __init__(self, noun='cat'):
        self.noun = noun

    @SillyDecorator
    def d(self):
        print('Hello %s %s' % (self.noun, self.punctuation))


obj = UsefulObject()
obj.d()
# [setup] silly init ...
# Hello cat ...

Python decorator to limit a method to a particular class?

I've got a large library of Django apps that are shared by a handful of Django projects/sites. Within each project/site there is an option to define a 'Mix In' class that will be mixed in to one of the in-library base classes (which many models sub-class from).
For this example let's say the in-library base class is PermalinkBase and the mix-in class is ProjectPermalinkBaseMixIn.
Because so many models subclass PermalinkBase, not all the methods/properties defined in ProjectPermalinkBaseMixIn will be utilized by all of PermalinkBase's subclasses.
I'd like to write a decorator that can be applied to methods/properties within ProjectPermalinkBaseMixIn in order to limit them from running (or at least returning None) if they are accessed from a non-approved class.
Here's how I'm doing it now:
class ProjectPermalinkBaseMixIn(object):
    """
    Project-specific Mix-In Class to `apps.base.models.PermalinkBase`
    """
    def is_video_in_season(self, season):
        # Ensure this only runs if it is being called from the video model
        if self.__class__.__name__ != 'Video':
            to_return = None
        else:
            videos_in_season = season.videos_in_this_season.all()
            if self in list(videos_in_season):
                to_return = True
            else:
                to_return = False
        return to_return
Here's how I'd like to do it:
class ProjectPermalinkBaseMixIn(object):
    """
    Project-specific Mix-In Class to `apps.base.models.PermalinkBase`
    """
    @limit_to_model('Video')
    def is_video_in_season(self, season):
        videos_in_season = season.videos_in_this_season.all()
        if self in list(videos_in_season):
            to_return = True
        else:
            to_return = False
        return to_return
Is this possible with decorators? This answer helped me to better understand decorators but I couldn't figure out how to modify it to solve the problem I listed above.
Are decorators the right tool for this job? If so, how would I write the limit_to_model decorator function? If not, what would be the best way to approach this problem?
I was looking at your problem and I think this might be an overcomplicated way to achieve what you are trying to do. However, I wrote this bit of code:
def disallow_class(*klass_names):
    def function_handler(fn):
        def decorated(self, *args, **kwargs):
            if self.__class__.__name__ in klass_names:
                print "access denied to class: %s" % self.__class__.__name__
                return None
            return fn(self, *args, **kwargs)
        return decorated
    return function_handler


class MainClass(object):
    @disallow_class('DisallowedClass', 'AnotherDisallowedClass')
    def my_method(self, *args, **kwargs):
        print "my_method running!! %s" % self


class DisallowedClass(MainClass): pass
class AnotherDisallowedClass(MainClass): pass
class AllowedClass(MainClass): pass


if __name__ == "__main__":
    x = DisallowedClass()
    y = AnotherDisallowedClass()
    z = AllowedClass()

    x.my_method()
    y.my_method()
    z.my_method()
If you run this bit of code on your command line the output will be something like:
access denied to class: DisallowedClass
access denied to class: AnotherDisallowedClass
my_method running!! <__main__.AllowedClass object at 0x7f2b7105ad50>
Regards
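The question actually asked for the allow-list form (limit_to_model). A minimal, self-contained sketch of that variant follows; Mixin, describe, Video and Article are stand-ins for the real ProjectPermalinkBaseMixIn, is_video_in_season and Django models:
from functools import wraps


def limit_to_model(model_name):
    """Run the decorated method only when called on the named class; otherwise return None."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(self, *args, **kwargs):
            if self.__class__.__name__ != model_name:
                return None
            return fn(self, *args, **kwargs)
        return wrapper
    return decorator


class Mixin:
    @limit_to_model('Video')
    def describe(self):
        return "only videos get here"


class Video(Mixin):
    pass


class Article(Mixin):
    pass


print(Video().describe())    # only videos get here
print(Article().describe())  # None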

String construction using OOP and Proxy pattern

I find it very interesting how SQLAlchemy constructs query strings, e.g.:
(Session.query(model.User)
.filter(model.User.age > 18)
.order_by(model.User.age)
.all())
As far as I can see, some kind of Proxy pattern is applied there. In my small project I need to do similar string construction using an OOP approach, so I tried to reproduce this behavior.
Firstly, some kind of object, one of many similar objects:
class SomeObject(object):
    items = None

    def __init__(self):
        self.items = []

    def __call__(self):
        return ' '.join(self.items) if self.items is not None else ''

    def a(self):
        self.items.append('a')
        return self

    def b(self):
        self.items.append('b')
        return self
All methods of this object return self, so I can call them in any order and an unlimited number of times.
Secondly, a proxy object that forwards calls to the subject's methods, except for the perform method, which calls the object to get the resulting string.
import operator


class Proxy(object):
    def __init__(self, some_object):
        self.some_object = some_object

    def __getattr__(self, name):
        self.method = operator.methodcaller(name)
        return self

    def __call__(self, *args, **kw):
        self.some_object = self.method(self.some_object, *args, **kw)
        return self

    def perform(self):
        return self.some_object()
And finally:
>>> obj = SomeObject()
>>> p = Proxy(obj)
>>> print p.a().a().b().perform()
a a b
What can you say about this implementation? Is there a better way to structure the classes so that they support this kind of string construction with the same syntax?
PS: Sorry for my english, it's not my primary language.
Actually, what you are looking at is not the Proxy pattern but the Builder pattern, and yes, your implementation is IMHO the classic one (using the fluent interface style).
I don't know what SQLAlchemy does, but I would implement the interface by having the Session.query() method return a Query object with methods like filter(), order_by(), all() etc. Each of these methods simply returns a new Query object taking into account the applied changes. This allows for method chaining as in your first example.
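As a rough illustration of that idea (a toy sketch, nothing to do with SQLAlchemy's real internals), each call returns a fresh builder that carries the accumulated state:
class Query:
    """Toy fluent builder: every method returns a new Query with the change applied."""
    def __init__(self, parts=()):
        self._parts = tuple(parts)

    def filter(self, condition):
        return Query(self._parts + (f"WHERE {condition}",))

    def order_by(self, column):
        return Query(self._parts + (f"ORDER BY {column}",))

    def all(self):
        return " ".join(("SELECT *",) + self._parts)


print(Query().filter("age > 18").order_by("age").all())
# SELECT * WHERE age > 18 ORDER BY age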
Your own code example has numerous problems. One example:
obj = SomeObject()
p = Proxy(obj)
a = p.a
b = p.b
print a().perform() # prints b
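One possible fix, sketched here and untested against your full use case, is for __getattr__ to return a fresh callable bound to the requested name instead of stashing the pending method on the shared proxy (SomeObject is the class from the question):
class Proxy(object):
    def __init__(self, some_object):
        self._obj = some_object

    def __getattr__(self, name):
        # return a fresh callable instead of mutating shared state on the proxy
        def call(*args, **kwargs):
            self._obj = getattr(self._obj, name)(*args, **kwargs)
            return self
        return call

    def perform(self):
        return self._obj()


print(Proxy(SomeObject()).a().a().b().perform())  # a a b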

Python extension methods

OK, in C# we have something like:
public static string Destroy(this string s) {
    return "";
}
So basically, when you have a string you can do:
str = "This is my string to be destroyed";
newstr = str.Destroy()
# instead of
newstr = Destroy(str)
Now this is cool because in my opinion it's more readable. Does Python have something similar? I mean instead of writing like this:
x = SomeClass()
div = x.getMyDiv()
span = x.FirstChild(x.FirstChild(div)) # so instead of this
I'd like to write:
span = div.FirstChild().FirstChild() # which is more readable to me
Any suggestion?
You can just modify the class directly, sometimes known as monkey patching.
def MyMethod(self):
    return self + self

MyClass.MyMethod = MyMethod
del MyMethod  # clean up namespace
I'm not 100% sure you can do this on a special class like str, but it's fine for your user-defined classes.
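For a concrete illustration with a made-up user-defined class (Vector and scaled are my own names, not from the question):
class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y


# defined outside the class and attached afterwards, extension-method style
def scaled(self, factor):
    return Vector(self.x * factor, self.y * factor)

Vector.scaled = scaled
del scaled  # clean up namespace

v = Vector(1, 2).scaled(3)
print(v.x, v.y)  # 3 6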
Update
You confirm in a comment my suspicion that this is not possible for a builtin like str. In which case I believe there is no analogue to C# extension methods for such classes.
Finally, the convenience of these methods, in both C# and Python, comes with an associated risk. Using these techniques can make code more complex to understand and maintain.
You can do what you have asked like the following:
def extension_method(self):
    ...  # do stuff

SomeClass.extension_method = extension_method  # 'SomeClass' stands in for the class being extended
I would use the Adapter pattern here. So, let's say we have a Person class and in one specific place we would like to add some health-related methods.
from dataclasses import dataclass


@dataclass
class Person:
    name: str
    height: float  # in meters
    mass: float  # in kg


class PersonMedicalAdapter:
    person: Person

    def __init__(self, person: Person):
        self.person = person

    def __getattr__(self, item):
        return getattr(self.person, item)

    def get_body_mass_index(self) -> float:
        return self.person.mass / self.person.height ** 2


if __name__ == '__main__':
    person = Person('John', height=1.7, mass=76)
    person_adapter = PersonMedicalAdapter(person)

    print(person_adapter.name)  # Call to Person object field
    print(person_adapter.get_body_mass_index())  # Call to wrapper object method
I consider it to be an easy-to-read, yet flexible and pythonic solution.
You can change the built-in classes by monkey-patching with the help of the forbidden fruit library.
But installing forbidden fruit requires a C compiler and an unrestricted environment, so it probably will not work, or will take hard effort to run, on Google App Engine, Heroku, etc.
I changed the behaviour of the unicode class in Python 2.7 for the Turkish i/I uppercase/lowercase problem with this library.
# -*- coding: utf8 -*-
# Redesigned by @guneysus
import __builtin__
from forbiddenfruit import curse

lcase_table = tuple(u'abcçdefgğhıijklmnoöprsştuüvyz')
ucase_table = tuple(u'ABCÇDEFGĞHIİJKLMNOÖPRSŞTUÜVYZ')


def upper(data):
    data = data.replace('i', u'İ')
    data = data.replace(u'ı', u'I')
    result = ''
    for char in data:
        try:
            char_index = lcase_table.index(char)
            ucase_char = ucase_table[char_index]
        except:
            ucase_char = char
        result += ucase_char
    return result

curse(__builtin__.unicode, 'upper', upper)


class unicode_tr(unicode):
    """For backward compatibility"""
    def __init__(self, *args, **kwargs):
        super(unicode_tr, self).__init__(*args, **kwargs)


if __name__ == '__main__':
    print u'istanbul'.upper()
You can achieve this nicely with the following context manager that adds the method to the class or object inside the context block and removes it afterwards:
class extension_method:
    def __init__(self, obj, method):
        method_name = method.__name__
        setattr(obj, method_name, method)
        self.obj = obj
        self.method_name = method_name

    def __enter__(self):
        return self.obj

    def __exit__(self, type, value, traceback):
        # remove this if you want to keep the extension method after context exit
        delattr(self.obj, self.method_name)
Usage is as follows:
class C:
    pass


def get_class_name(self):
    return self.__class__.__name__


with extension_method(C, get_class_name):
    assert hasattr(C, 'get_class_name')  # the method is added to C
    c = C()
    print(c.get_class_name())  # prints 'C'

assert not hasattr(C, 'get_class_name')  # the method is gone from C
I'd like to think that extension methods in C# are pretty much the same as a normal method call where you pass the instance and then the arguments.
instance.method(*args, **kwargs)
method(instance, *args, **kwargs) # pretty much the same as above, I don't see much benefit of it getting implemented in python.
After a week, I have a solution that is closest to what I was seeking. It relies on getattr and __getattr__. Here is an example for those who are interested.
class myClass:
    def __init__(self): pass

    def __getattr__(self, attr):
        try:
            methodToCall = getattr(myClass, attr)
            return methodToCall(myClass(), self)
        except:
            pass

    def firstChild(self, node):
        # bla bla bla
        ...

    def lastChild(self, node):
        # bla bla bla
        ...


x = myClass()
div = x.getMYDiv()
y = div.firstChild.lastChild
I haven't tested this example; I just give it as an idea for whoever might be interested. Hope that helps.
C# implemented extension methods because it lacks first-class functions; Python has them, and they are the preferred way of "wrapping" common functionality across disparate classes in Python.
There are good reasons to believe Python will never have extension methods; simply look at the available built-ins:
len(o) calls o.__len__
iter(o) calls o.__iter__
next(o) calls o.__next__ (o.next in Python 2)
format(o, s) calls o.__format__(s)
Basically, Python likes functions.
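A tiny illustration of that dispatch, using a made-up class:
class Deck:
    def __init__(self, cards):
        self._cards = list(cards)

    def __len__(self):
        return len(self._cards)

    def __iter__(self):
        return iter(self._cards)


deck = Deck(["ace", "king", "queen"])
print(len(deck))         # 3, via Deck.__len__
print(list(iter(deck)))  # ['ace', 'king', 'queen'], via Deck.__iter__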
