I have been reading documentation describing class inheritance, abstract base classes and even Python interfaces, but nothing seems to be exactly what I want: namely, a simple way of building virtual classes. When the virtual class gets called, I would like it to instantiate some more specific class based on the parameters it is given and hand that back to the calling function. For now I have a crude way of rerouting calls to the virtual class down to the underlying class.
The idea is the following:
class Shape:
    def __init__(self, description):
        if description == "It's flat":
            self.underlying_class = Line(description)
        elif description == "It's spiky":
            self.underlying_class = Triangle(description)
        elif description == "It's big":
            self.underlying_class = Rectangle(description)

    def number_of_edges(self, parameters):
        return self.underlying_class.number_of_edges(parameters)
class Line:
    def __init__(self, description):
        self.desc = description

    def number_of_edges(self, parameters):
        return 1

class Triangle:
    def __init__(self, description):
        self.desc = description

    def number_of_edges(self, parameters):
        return 3

class Rectangle:
    def __init__(self, description):
        self.desc = description

    def number_of_edges(self, parameters):
        return 4

shape_dont_know_what_it_is = Shape("It's big")
shape_dont_know_what_it_is.number_of_edges(parameters)
My rerouting is far from optimal, as only calls to the number_of_edges() function get passed on. Adding something like this to Shape doesn't seem to do the trick either:
def __getattr__(self, *args):
    return underlying_class.__getattr__(*args)
What am I doing wrong? Is the whole idea badly implemented? Any help is greatly appreciated.
I agree with TooAngel, but I'd use the __new__ method.
class Shape(object):
    def __new__(cls, *args, **kwargs):
        if cls is Shape:                              # <-- required because Line's
            description, args = args[0], args[1:]     #     __new__ method is the
            if description == "It's flat":            #     same as Shape's
                new_cls = Line
            else:
                raise ValueError("Invalid description: {}.".format(description))
        else:
            new_cls = cls
        return super(Shape, cls).__new__(new_cls, *args, **kwargs)

    def number_of_edges(self):
        return "A shape can have many edges…"

class Line(Shape):
    def number_of_edges(self):
        return 1

class SomeShape(Shape):
    pass
>>> l1 = Shape("It's flat")
>>> l1.number_of_edges()
1
>>> l2 = Line()
>>> l2.number_of_edges()
1
>>> u = SomeShape()
>>> u.number_of_edges()
'A shape can have many edges…'
>>> s = Shape("Hexagon")
ValueError: Invalid description: Hexagon.
I would prefer doing it with a factory:
def factory(description):
    if description == "It's flat":
        return Line(description)
    elif description == "It's spiky":
        return Triangle(description)
    elif description == "It's big":
        return Rectangle(description)
or:
def factory(description):
    classDict = {
        "It's flat": Line("It's flat"),
        "It's spiky": Triangle("It's spiky"),
        "It's big": Rectangle("It's big"),
    }
    return classDict[description]
and inherit the classes from Shape:

class Line(Shape):
    def __init__(self, description):
        self.desc = description

    def number_of_edges(self, parameters):
        return 1
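A quick usage sketch of the factory, assuming Triangle and Rectangle subclass Shape in the same way as Line (the None argument is just a placeholder, since number_of_edges ignores its parameter):

shape = factory("It's big")
print(isinstance(shape, Shape))      # True
print(shape.number_of_edges(None))   # 4

One design note: the dictionary variant above builds all three instances up front on every call; storing the classes in the dictionary and instantiating only the looked-up one avoids that.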
Python doesn't have virtual classes out of the box. You will have to implement them yourself (it should be possible, Python's reflection capabilities should be powerful enough to let you do this).
However, if you need virtual classes, then why don't you just use a programming language which does have virtual classes like Beta, gBeta or Newspeak? (BTW: are there any others?)
In this particular case, though, I don't really see how virtual classes would simplify your solution, at least not in the example you have given. Maybe you could elaborate why you think you need virtual classes?
Don't get me wrong: I like virtual classes, but the fact that only three languages have ever implemented them, only one of those three is still alive and exactly 0 of those three are actually used by anybody is somewhat telling …
You can change an instance's class by assigning to its __class__ attribute, but it's much better to just make a function that returns an instance of an arbitrary class.
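For illustration, a minimal sketch of what that __class__ reassignment looks like, reusing names from the question (the factory-function approach above is still the cleaner choice):

class Shape:
    def __init__(self, description):
        if description == "It's big":
            self.__class__ = Rectangle   # the instance now behaves as a Rectangle
        self.desc = description

class Rectangle(Shape):
    def number_of_edges(self, parameters):
        return 4

s = Shape("It's big")
print(type(s).__name__)          # Rectangle
print(s.number_of_edges(None))   # 4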
On another note, unless you are using Python 3, all classes should inherit from object, like this; otherwise you end up with an old-style class:
class A(object):
pass
Related
I know the title is probably a bit confusing, so let me give you an example. Suppose you have a base class Base which is intended to be subclassed to create more complex objects. But you also have optional functionality that you don't need for every subclass, so you put it in a secondary class OptionalStuffA that is always intended to be subclassed together with the base class. Should you also make that secondary class a subclass of Base?
This is of course only relevant if you have more than one OptionalStuff class and you want to combine them in different ways, because otherwise you don't need to subclass both Base and OptionalStuffA (and just have OptionalStuffA be a subclass of Base so you only need to subclass OptionalStuffA). I understand that it shouldn't make a difference for the MRO if Base is inherited from more than once, but I'm not sure if there are any drawbacks to making all the secondary classes inherit from Base.
Below is an example scenario. I've also thrown in the QObject class as a 'third party' token class whose functionality is necessary for one of the secondary classes to work. Where do I subclass it? The example below shows how I've done it so far, but I doubt this is the way to go.
from PyQt5.QtCore import QObject

class Base:
    def __init__(self):
        self._basic_stuff = None

    def reset(self):
        self._basic_stuff = None

class OptionalStuffA:
    def __init__(self):
        super().__init__()
        self._optional_stuff_a = None

    def reset(self):
        if hasattr(super(), 'reset'):
            super().reset()
        self._optional_stuff_a = None

    def do_stuff_that_only_works_if_my_children_also_inherited_from_Base(self):
        self._basic_stuff = not None

class OptionalStuffB:
    def __init__(self):
        super().__init__()
        self._optional_stuff_b = None

    def reset(self):
        if hasattr(super(), 'reset'):
            super().reset()
        self._optional_stuff_b = None

    def do_stuff_that_only_works_if_my_children_also_inherited_from_QObject(self):
        print(self.objectName())

class ClassThatIsActuallyUsed(Base, OptionalStuffA, OptionalStuffB, QObject):
    def __init__(self):
        super().__init__()
        self._unique_stuff = None

    def reset(self):
        if hasattr(super(), 'reset'):
            super().reset()
        self._unique_stuff = None
What I gather from your problem is that you want different functions and properties based on different conditions; that sounds like a good reason to use a metaclass.
It all depends on how complex each of your classes is and what you are building. If it is for some library or API, a metaclass can do magic if used correctly.
A metaclass is perfect for adding functions and properties to a class based on some condition: you just have to add all your subclass functions into one metaclass and attach that metaclass to your main class.
Where to start
You can read about metaclasses here, or you can watch a talk about them here.
After you have a better understanding of metaclasses, see the source code of Django's ModelForm here and here, but before that take a brief look at how a Django Form works from the outside; this will give you an idea of how to implement it.
This is how I would implement it.
# You can also inherit from another metaclass, but type has to be at the top of the inheritance
class meta_class(type):
    # create the class based on conditions
    """
    mcs: the metaclass, behaves much like self (not exactly sure).
    name: name of the new class (ClassThatIsActuallyUsed).
    bases: bases of the new class (Base, ...).
    attrs: attrs of the new class (Meta, ...).
    """
    def __new__(mcs, name, bases, attrs):
        meta = attrs.get('Meta')
        if meta.optionA:
            attrs['reset'] = resetA     # resetA, resetB, resetC and functionA are
        if meta.optionB:                # placeholders for the real implementations
            attrs['reset'] = resetB
        if meta.optionC:
            attrs['reset'] = resetC
        if any(base.__name__ == "QObject" for base in bases):
            attrs['do_stuff_that_only_works_if_my_children_also_inherited_from_QObject'] = functionA
        return type(name, bases, attrs)
class Base(metaclass=meta_class):   # you can also pass kwargs to the metaclass here
    # define some common functions here

    class Meta:
        # set default values here for the class
        optionA = False
        optionB = False
        optionC = False

class ClassThatIsActuallyUsed(Base):
    class Meta:
        optionA = True
        # optionB is False by default
        optionC = True
EDIT: Elaborated on how to implement MetaClass.
Let me start with another alternative. In the example below the Base.foo method is a plain identity function, but options can override that.
class Base:
    def foo(self, x):
        return x

class OptionDouble:
    def foo(self, x):
        x *= 2                  # preprocess example
        return super().foo(x)

class OptionHex:
    def foo(self, x):
        result = super().foo(x)
        return hex(result)      # postprocess example

class Combined(OptionDouble, OptionHex, Base):
    pass

b = Base()
print(b.foo(10))    # 10

c = Combined()
print(c.foo(10))    # 2x10 = 20, as hex string: "0x14"
The key is that in the definition of Combined the Option classes are specified before Base:

class Combined(OptionDouble, OptionHex, Base):

Read the class names left to right; in this simple case that is the order in which the foo() implementations are searched. It is called the method resolution order (MRO). It also defines what exactly super() means in particular classes, and that is important, because the Options are written as wrappers around the super() implementation.
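To make the order concrete, printing the MRO of the correctly ordered Combined class gives (module prefixes trimmed):

print(Combined.__mro__)
# (<class 'Combined'>, <class 'OptionDouble'>, <class 'OptionHex'>,
#  <class 'Base'>, <class 'object'>)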
If you do it the other way around, it won't work:
class Combined(Base, OptionDouble, OptionHex):
    pass

c = Combined()
print(Combined.__mro__)
print(c.foo(10))    # 10, options not effective!
In this case the Base implementation is called first and it directly returns the result.
You could take care of the correct base order manually or you could write a function that checks it. It walks through the MRO list and once it sees the Base it will not allow an Option after it.
class Base:
    def __init_subclass__(cls, *args, **kwargs):
        super().__init_subclass__(*args, **kwargs)
        base_seen = False
        for mr in cls.__mro__:
            if base_seen:
                if issubclass(mr, Option):
                    raise TypeError(
                        f"The order of {cls.__name__} base classes is incorrect")
            elif mr is Base:
                base_seen = True

    def foo(self, x):
        return x

class Option:
    pass

class OptionDouble(Option):
    ...

class OptionHex(Option):
    ...
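A quick check of the guard, reusing the classes above (a sketch; the class names Correct and Wrong are made up):

class Correct(OptionDouble, OptionHex, Base):   # accepted
    pass

try:
    class Wrong(Base, OptionDouble, OptionHex):
        pass
except TypeError as err:
    print(err)   # The order of Wrong base classes is incorrect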
Now to answer your comment. I wrote that @wettler's approach could be simplified. I meant something like this:
class Base:
    def __init_subclass__(cls, *args, **kwargs):
        super().__init_subclass__(*args, **kwargs)
        print("options for the class", cls.__name__)
        print('A', cls.optionA)
        print('B', cls.optionB)
        print('C', cls.optionC)
        # ... modify the class according to the options ...
        bases = cls.__bases__
        # ... check if QObject is present in bases ...

    # defaults
    optionA = False
    optionB = False
    optionC = False

class ClassThatIsActuallyUsed(Base):
    optionA = True
    optionC = True
This demo will print:
options for the class ClassThatIsActuallyUsed
A True
B False
C True
I want to have an abstract class Task and some derived classes like TaskA, TaskB, ...
I need a static method in Task that fetches all the tasks and returns a list of them. The problem is that I have to fetch every task differently. I want Task to be universal, so when I create a new class, for example TaskC, it should work without changing the Task class. Which design pattern should I use?
Let's say every derived Task has a decorator with its unique id; I am looking for a function that finds the class by id and creates an instance of it. How do I do that in Python?
There are a couple of ways you could achieve this.
The first and simplest is using the __new__ method as a factory to decide which subclass should be returned.
class Base:
    UUID = "0"

    def __new__(cls, *args, **kwargs):
        if args == "some condition":
            return A(*args, **kwargs)
        elif args == "another condition":
            return B(*args, **kwargs)

class A(Base):
    UUID = "1"

class B(Base):
    UUID = "2"

instance = Base("some", "args", "for", "the", condition=True)
In this example, if you wanted to make sure that the class is selected by UUID, you could replace the if condition with something like:

if a.UUID == "an argument you passed":
    return A

But it's not really useful: since you already know the specific UUID, you might as well not bother going through the interface.
Since I don't know what you want the decorator for, I can't think of a way to integrate it.
EDIT TO ADDRESS THE NOTE:
You don't need to update it every time if you write your expressions smartly.
Let's say that the deciding factor comes from a config file that says "use class B":

for sub_class in cls.__subclasses__():
    if sub_class.UUID == config.uuid:
        return sub_class(*args, **kwargs)  # make an instance and return it

The problem with that is that a UUID is not meaningful to us as people. It would be easier to understand if we used a config.name instead, replacing every place we have uuid in the example.
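Putting those pieces together, a minimal runnable sketch (the config object and its name attribute are hypothetical stand-ins for the real config file):

from types import SimpleNamespace

config = SimpleNamespace(name="B")     # stand-in for the real config

class Base:
    def __new__(cls, *args, **kwargs):
        if cls is Base:                # only dispatch when called through Base()
            for sub_class in cls.__subclasses__():
                if sub_class.__name__ == config.name:
                    return super().__new__(sub_class)
            raise ValueError(f"No subclass named {config.name!r}")
        return super().__new__(cls)

class A(Base):
    pass

class B(Base):
    pass

instance = Base()
print(type(instance).__name__)   # B

Returning super().__new__(sub_class) instead of calling sub_class() directly avoids re-entering the dispatching __new__.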
I was fighting with this for a long time, and this is exactly what I wanted:
from typing import Dict

def class_id(id: int):
    def func(cls):
        cls.class_id = lambda: id
        return cls
    return func

def find_subclass_by_id(cls: type, id: int) -> type:
    for t in cls.__subclasses__():
        if getattr(t, "class_id")() == id:
            return t

def get_class_id(obj) -> int:
    return getattr(type(obj), "class_id")()

class Task():
    def load(self, dict: Dict) -> None:
        pass

    @staticmethod
    def from_dict(dict: Dict) -> 'Task':
        task_type = int(dict['task_type'])
        t = find_subclass_by_id(Task, task_type)
        obj: Task = t()
        obj.load(dict)
        return obj

    @staticmethod
    def fetch(filter: Dict):
        return [Task.from_dict(doc) for doc in list_of_dicts]  # list_of_dicts comes from the data source

@class_id(1)
class TaskA(Task):
    def load(self, dict: Dict) -> None:
        ...

...
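A quick usage sketch of the helpers above (the 'task_type' key comes from from_dict; the value is hypothetical):

task = Task.from_dict({'task_type': '1'})
print(type(task).__name__)                    # TaskA
print(get_class_id(task))                     # 1
print(find_subclass_by_id(Task, 1) is TaskA)  # True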
I'm not sure whether this is a great approach to be using, but I'm not hugely experienced with Python so please accept my apologies. I've tried to do some research on this but other related questions have been given alternative problem-specific solutions - none of which apply to my specific case.
I have a class that handles the training/querying of my specific machine learning model. The algorithm runs on a remote sensor; various values are fed into the object, which returns None if the algorithm isn't trained. Once trained, it returns either True or False depending on the classification assigned to new inputs. Occasionally, the class updates a couple of threshold parameters and I need to know when this occurs.
I am using sockets to pass messages from the remote sensor to my main server. I didn't want to complicate the ML algorithm class by filling it up with message passing code and so instead I've been handling this in a Main class that imports the "algorithm" class. I want the Main class to be able to determine when the threshold parameters are updated and report this back to the server.
class MyAlgorithmClass:
    def feed_value(self):
        ...

class Main:
    def __init__(self):
        self._algorithm_data = MyAlgorithmClass()
        self._sensor_data_queue = Queue()

    def process_data(self):
        while True:
            sensor_value = self._sensor_data_queue.get()
            result, value = self._algorithm_data.feed_value(sensor_value)
            if result is None:
                # value represents % training complete
                self._socket.emit('training', value)
            elif result is True:
                # value represents % chance that input is categoryA
                self._socket.emit('categoryA', value)
            elif result is False:
                ...
My initial idea was to add a property to MyAlgorithmClass with a setter. I could then decorate this in my Main class so that every time the setter is called, I can use the value... for example:
class MyAlgorithmClass:
    @property
    def param1(self):
        return self._param1

    @param1.setter
    def param1(self, value):
        self._param1 = value

class Main:
    def __init__(self):
        self._algorithm_data = MyAlgorithmClass()
        self._sensor_data_queue = Queue()

    def watch_param1(func):
        def inner(*args):
            self._socket.emit('param1_updated', *args)
            func(*args)
My problem now is: how do I decorate the self._algorithm_data.param1 setter with watch_param1? If I simply set self._algorithm_data.param1 = watch_param1 then I will just end up setting self._algorithm_data._param1 equal to my function, which isn't what I want to do.
I could use getter/setter methods instead of a property, but this isn't very pythonic and as multiple people are modifying this code, I don't want the methods to be replaced/changed for properties by somebody else later on.
What is the best approach here? This is a small example but I will have slightly more complex examples of this later on and I don't want something that will cause overcomplication of the algorithm class. Obviously, another option is the Observer pattern but I'm not sure how appropriate it is here where I only have a single variable to monitor in some cases.
I'm really struggling to get a good solution put together so any advice would be much appreciated.
Thanks in advance,
Tom
Use descriptors. They let you customize attribute lookup, storage, and deletion in Python.
A simplified toy version of your code with descriptors looks something like:
class WatchedParam:
    def __init__(self, name):
        self.name = name

    def __get__(self, instance, insttype=None):
        print(f"{self.name} : value accessed")
        return getattr(instance, '_' + self.name)

    def __set__(self, instance, new_val):
        print(f"{self.name} : value set")
        setattr(instance, '_' + self.name, new_val)

class MyAlgorithmClass:
    param1 = WatchedParam("param1")
    param2 = WatchedParam("param2")

    def __init__(self, param1, param2, param3):
        self.param1 = param1
        self.param2 = param2
        self.param3 = param3

class Main:
    def __init__(self):
        self._data = MyAlgorithmClass(10, 20, 50)

m = Main()
m._data.param1        # calls WatchedParam.__get__
m._data.param2 = 100  # calls WatchedParam.__set__
The WatchedParam class is a descriptor and can be used in MyAlgorithmClass to specify the parameters that need to be monitored.
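If the goal is to notify Main (for example via its socket) rather than print, one possible refinement is to give the descriptor an optional callback. This is a hedged sketch with a trimmed-down MyAlgorithmClass; the on_change hook and the lambda are made up for illustration:

class WatchedParam:
    def __init__(self, name, on_change=None):
        self.name = name
        self.on_change = on_change          # hypothetical notification hook

    def __get__(self, instance, insttype=None):
        if instance is None:                # accessed on the class itself
            return self
        return getattr(instance, '_' + self.name)

    def __set__(self, instance, new_val):
        setattr(instance, '_' + self.name, new_val)
        if self.on_change is not None:
            self.on_change(self.name, new_val)

class MyAlgorithmClass:
    param1 = WatchedParam("param1", on_change=lambda name, value: print(name, "->", value))

MyAlgorithmClass().param1 = 42    # prints: param1 -> 42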
The solution I went for is as follows, using a 'Proxy' subclass which overrides the properties. Eventually, once I have a better understanding of the watched parameters, I won't need to watch them anymore. At this point I will be able to swap out the Proxy for the base class and continue using the code as normal.
class MyAlgorithmClassProxy(MyAlgorithmClass):
    @property
    def watch_param1(self):
        return MyAlgorithmClass.watch_param1.fget(self)

    @watch_param1.setter
    def watch_param1(self, value):
        self._socket.emit('param1_updated', value)
        MyAlgorithmClass.watch_param1.fset(self, value)
class Spam(object):
    # a_string = 'candy'

    def __init__(self, sold=0, cost=0):
        self.sold = sold
        self.cost = cost

    @staticmethod
    def total_cost():
        return True

    @classmethod
    def items_sold(cls, how_many):

    @property
    def silly_walk(self):
        return print(self.a_string)

    @silly_walk.setter
    def silly_walk(self, new_string):
        self.a_string = new_string.upper()

    def do_cost(self):
        if self.total_cost():
            print('Total cost is:', self.cost)
.
from spam import Spam

def main():
    cost = 25
    sold = 100
    a_string = 'sweets'
    sp = Spam(100, 25)
    sp.do_cost()
    sw = Spam.silly_walk(a_string)
    sw.silly_walk()

if __name__ == '__main__':
    main()
So I'm new to Python and I don't understand how to use the setters and getters in this. What I want to do is:
Use @property to create a setter and getter for a property named silly_walk. Have the setter uppercase the silly_walk string.
Show example code that would access the static method.
Show example code that would use the silly_walk setter and getter.
I'm getting very confused about what "self" does in the class and I'm not sure if what I'm doing is correct.
Update:
The problem was the @classmethod not having a return and an indentation error, so everything is fixed. Thanks, everybody.
self is a convention. Since you're inside a class, you don't have functions there, you have methods. Methods expect a reference to the object calling them as the first argument, which by convention is named self. You can call it anything you like:
class Foo(object):
    def __init__(itsa_me_maaario, name):
        itsa_me_maaario.name = "Mario"
That works just as well.
As for the rest of your code -- what's your QUESTION there? Looks like your setter is a bit weird, but other than that it should work mostly okay. This is better:
class Spam(object):  # inherit from object in py2 for new-style classes
    def __init__(self, a_string, sold=0, cost=0):  # put the positional arg first
        ...

    @staticmethod
    def total_cost():
        # you have to do something meaningful here. A static method can't access
        # any of the object's attributes, it's really only included for grouping
        # related functions to their classes.
        ...

    @classmethod
    def items_sold(cls, how_many):
        # the first argument to a classmethod is the class, not the object, so
        # by convention name it cls. Again this should be something relevant to
        # the class, not to the object.
        ...

    @property
    def silly_walk(self):
        return self.a_string
        # don't call it itself.

    @silly_walk.setter
    def silly_walk(self, new_string):
        self.a_string = new_string
        # it really just hides the attribute.
For instance I have a class I built to abstract a computer system I'm in charge of. It might be something like:
import subprocess

class System(object):
    type_ = "Base system"

    def __init__(self, sitenum, devicenum, IP):
        self._sitenum = sitenum
        self._devicenum = devicenum
        self._IP = IP
        # the leading underscores are a flag to future coders that these are
        # "private" variables. Nothing stopping someone from using it anyway,
        # because System()._IP is still that attribute, but it makes it clear
        # that they're not supposed to be used that way.

    @staticmethod
    def ping_system(IP):
        subprocess.call(["ping", IP], shell=True)  # OH GOD SECURITY FLAW HERE
        # group this with Systems because maybe that's how I want it? It's an
        # aesthetic choice. Note that this pings ANY system and requires an
        # argument of an IP address!

    @classmethod
    def type_of_system(cls):
        return cls.type_
        # imagine I had a bunch of objects that inherited from System, each w/
        # a different type_, but they all inherit this....

    @property
    def description(self):
        return "Site {}, Device {} # {}".format(self._sitenum,
                                                self._devicenum,
                                                self._IP)

    @description.setter
    def description(self, *args):
        if len(args) == 3:
            self._sitenum, self._devicenum, self._IP = args
        elif len(args) == 1 and len(args[0]) == 3:
            self._sitenum, self._devicenum, self._IP = args[0]
        else:
            raise ValueError("Redefine description as Sitenum, Devicenum, IP")
Example:
computer = System(1, 1, '192.168.100.101')
System.ping_system('192.160.100.101') # works
computer.type_of_system() # "Base system"
computer.description # "Site 1, Device 1 # 192.168.100.101"
new_description = [1, 2, '192.168.100.102']
computer.description = new_description
# invokes description.setter
computer._devicenum # is 2 after the setter does its magic.
OK, in C# we have something like:
public static string Destroy(this string s) {
    return "";
}
So basically, when you have a string you can do:
str = "This is my string to be destroyed";
newstr = str.Destroy()
# instead of
newstr = Destroy(str)
Now this is cool because in my opinion it's more readable. Does Python have something similar? I mean instead of writing like this:
x = SomeClass()
div = x.getMyDiv()
span = x.FirstChild(x.FirstChild(div)) # so instead of this
I'd like to write:
span = div.FirstChild().FirstChild() # which is more readable to me
Any suggestion?
You can just modify the class directly, sometimes known as monkey patching.
def MyMethod(self):
    return self + self

MyClass.MyMethod = MyMethod
del MyMethod  # clean up namespace
I'm not 100% sure you can do this on a special class like str, but it's fine for your user-defined classes.
Update
You confirm in a comment my suspicion that this is not possible for a builtin like str. In which case I believe there is no analogue to C# extension methods for such classes.
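For what it's worth, a quick check confirms this (the exact error wording varies by Python version):

try:
    str.destroy = lambda self: ""
except TypeError as exc:
    print(exc)   # e.g. "can't set attributes of built-in/extension type 'str'"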
Finally, the convenience of these methods, in both C# and Python, comes with an associated risk. Using these techniques can make code more complex to understand and maintain.
You can do what you have asked as follows (YourClass stands for whatever class is being extended):

def extension_method(self):
    # do stuff
    ...

YourClass.extension_method = extension_method
I would use the Adapter pattern here. So, let's say we have a Person class and in one specific place we would like to add some health-related methods.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    height: float  # in meters
    mass: float    # in kg

class PersonMedicalAdapter:
    person: Person

    def __init__(self, person: Person):
        self.person = person

    def __getattr__(self, item):
        return getattr(self.person, item)

    def get_body_mass_index(self) -> float:
        return self.person.mass / self.person.height ** 2

if __name__ == '__main__':
    person = Person('John', height=1.7, mass=76)
    person_adapter = PersonMedicalAdapter(person)

    print(person_adapter.name)  # Call to Person object field
    print(person_adapter.get_body_mass_index())  # Call to wrapper object method
I consider it to be an easy-to-read, yet flexible and pythonic solution.
You can change the built-in classes by monkey-patching with the help of forbidden fruit
But installing forbidden fruit requires a C compiler and unrestricted environment so it probably will not work or needs hard effort to run on Google App Engine, Heroku, etc.
I changed the behaviour of unicode class in Python 2.7 for a Turkish i,I uppercase/lowercase problem by this library.
# -*- coding: utf8 -*-
# Redesigned by @guneysus

import __builtin__
from forbiddenfruit import curse

lcase_table = tuple(u'abcçdefgğhıijklmnoöprsştuüvyz')
ucase_table = tuple(u'ABCÇDEFGĞHIİJKLMNOÖPRSŞTUÜVYZ')

def upper(data):
    data = data.replace('i', u'İ')
    data = data.replace(u'ı', u'I')
    result = ''
    for char in data:
        try:
            char_index = lcase_table.index(char)
            ucase_char = ucase_table[char_index]
        except:
            ucase_char = char
        result += ucase_char
    return result

curse(__builtin__.unicode, 'upper', upper)

class unicode_tr(unicode):
    """For backward compatibility"""
    def __init__(self, *args, **kwargs):
        super(unicode_tr, self).__init__(*args, **kwargs)

if __name__ == '__main__':
    print u'istanbul'.upper()
You can achieve this nicely with the following context manager that adds the method to the class or object inside the context block and removes it afterwards:
class extension_method:
    def __init__(self, obj, method):
        method_name = method.__name__
        setattr(obj, method_name, method)
        self.obj = obj
        self.method_name = method_name

    def __enter__(self):
        return self.obj

    def __exit__(self, type, value, traceback):
        # remove this if you want to keep the extension method after context exit
        delattr(self.obj, self.method_name)
Usage is as follows:
class C:
    pass

def get_class_name(self):
    return self.__class__.__name__

with extension_method(C, get_class_name):
    assert hasattr(C, 'get_class_name')  # the method is added to C
    c = C()
    print(c.get_class_name())  # prints 'C'

assert not hasattr(C, 'get_class_name')  # the method is gone from C
I'd like to think that extension methods in C# are pretty much the same as normal method call where you pass the instance then arguments and stuff.
instance.method(*args, **kwargs)
method(instance, *args, **kwargs) # pretty much the same as above, I don't see much benefit of it getting implemented in python.
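To make that concrete (Div and first_child are made-up names, just to show the two spellings really are the same call in Python):

class Div:
    def first_child(self):
        return "span"

d = Div()
print(d.first_child())      # bound-method call
print(Div.first_child(d))   # the same function called explicitly with the instance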
After a week, I have a solution that is closest to what I was seeking. The solution consists of using getattr and __getattr__. Here is an example for those who are interested.
class myClass:
    def __init__(self):
        pass

    def __getattr__(self, attr):
        try:
            methodToCall = getattr(myClass, attr)
            return methodToCall(myClass(), self)
        except:
            pass

    def firstChild(self, node):
        # bla bla bla
        ...

    def lastChild(self, node):
        # bla bla bla
        ...

x = myClass()
div = x.getMYDiv()
y = div.firstChild.lastChild
I haven't tested this example; I just gave it to give an idea to whoever might be interested. Hope that helps.
C# implemented extension methods because it lacks first-class functions; Python has them, and they are the preferred way of "wrapping" common functionality across disparate classes in Python.
There are good reasons to believe Python will never have extension methods; simply look at the available built-ins:
len(o) calls o.__len__
iter(o) calls o.__iter__
next(o) calls o.__next__ (o.next in Python 2)
format(o, s) calls o.__format__(s)
Basically, Python likes functions.
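For illustration, a made-up class that plugs into those built-ins by defining the corresponding dunder methods:

class Box:
    def __init__(self, items):
        self.items = list(items)

    def __len__(self):
        return len(self.items)

    def __iter__(self):
        return iter(self.items)

b = Box([1, 2, 3])
print(len(b))          # 3, dispatched to Box.__len__
print([x for x in b])  # [1, 2, 3], dispatched to Box.__iter__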