I need to change a class variable, but I don't know if I can do it at runtime. I'm using third-party open source software, and I'm not very experienced with pure Python inheritance (the software I use provides a custom inheritance system) or with Python in general. My idea was to inherit from the class and change the constructor, but the objects are still initialized under the original software's class name, so they never go through my init.
I partially solved it by inheriting from the classes that use the original class name and overriding their methods so they now use my classes, but I still cannot reach every method because some of them are static.
Full steps about what I was trying to do can be found here:
Inherit class Worker on Odoo15
I think that I could solve the problem if I could use something, like a decorator, to tell the parent classes to 'look down' to the child class constructor when they are initialized. Is there a way to do that?
I will provide an example:
class A(object):
    def __init__(self):
        self.timeout = 0

    def print_class_info(self):
        print(f'Obj A timeout: {self.timeout}')

class B(A):
    def __init__(self):
        super().__init__()
        self.timeout = 10

    def print_class_info(self):
        print(f'Obj B timeout: {self.timeout}')

# Is it possible, somehow, to make obj_A use the init of class B
# even if the call to the class is on class A?
obj_A = A()
obj_B = B()
obj_A.print_class_info()
obj_B.print_class_info()
OUT:
Obj A timeout: 0
Obj B timeout: 10
Of course, the situation is more complex in the real scenario, so I'm not sure I can simply access object A and set the class variable. I think I would have to do it at runtime, probably with a server restart, and I'm not even sure how to access the objects at runtime; as I said, I'm not very experienced with pure Python.
Maybe there is an easy way that I just don't see or know: is it possible, basically, to use a subclass constructor from a parent class call?
You can assign any attribute to a class, including a method. This is called monkey patching:
# save the old init function
A.__oldinit__ = A.__init__

# create a new function that calls the old one
def custom_init(self):
    self.__oldinit__()
    self.timeout = 10

# overwrite the old function
# the actual old function will still exist because
# it's referenced as A.__oldinit__ as well
A.__init__ = custom_init

# optional cleanup
del custom_init
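After the patch, even objects created through the original name A pick up the new initializer. A quick check with the example classes above:

obj_A = A()               # still constructed via class A
obj_A.print_class_info()  # Obj A timeout: 10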
Related
TL;DR: Python; I have Parent, Child classes. I have an instance of Parent class, parent. Can I make a Child class instance whose super() is parent?
Somewhat specific use case (workaround available) is as follows: I'd like to make an instance of the Logger class (from Python's logging module) with the _log method overloaded. Methods like logger.info or logger.error call this method with the level specified as INFO or ERROR etc.; I'd like to replace this one method, touch nothing else, and have it all work seamlessly.
Here are some things that don't work (well):
I can't just inherit from logging.Logger and overload this one method and the constructor, because Logger instances tend to be created via a factory method, logging.getLogger(name). So I can't just overload the constructor of the wrapper like:
class WrappedLogger(logging.Logger):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def _log(self, *args, **kwargs):
        ...
and expect it to all work OK.
I could make a wrapper class which provides the methods I'd like to call on the resulting instance, like .info or .error, but then I have to handle all the cases manually. It also doesn't work well when the _log method is buried a few calls down the stack: there is basically no way to guarantee that every use of the wrapped class will call my desired _log method.
I can make a little kludge like so:
class WrappedLogger(logging.Logger):
    def __init__(self, parent):
        self._parent = parent

    def _log(self, *args, **kwargs):
        ...  # the overload goes here

    def __getattr__(self, method_name):
        return getattr(self._parent, method_name)
Now whenever I have an instance of this class and call, say, wrapped.info(...), it will retrieve the parent's info method and call it, which will then call self._log, which in turn points back to my wrapped instance. But this feels very ugly.
Similarly, I could take a regular instance of Logger and manually swap out the method; this is maybe a bit less "clever" and less ugly than the above, but similarly underwhelming.
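For concreteness, that manual swap might look like this; a minimal sketch using types.MethodType, where my_log is a hypothetical replacement (not part of the original question):

import logging
import types

def my_log(self, level, msg, *args, **kwargs):
    # hypothetical replacement; peeks at the message, then defers to the original
    print("intercepted:", msg)
    logging.Logger._log(self, level, msg, *args, **kwargs)

logger = logging.getLogger("demo")
# bind the function to this one instance; other loggers are untouched
logger._log = types.MethodType(my_log, logger)
logger.error("something happened")  # prints "intercepted: ..." first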
This question has been asked a few times, but in slightly different contexts, where other solutions were proposed. Rather than looking for a workaround, I'm interested in whether there is a native way of constructing a child class instance with the parent instance specified.
Related questions:
Create child class instances from parent class instance, and call parent methods from child class instance - here effectively a workaround is suggested
Python construct child class from parent - here the parent can be created in the child's constructor
If your goal is to supply a custom logger class that is used by getLogger, you can "register" the custom class with the logging manager.
So, let's define a custom logger class
from logging import Logger

class MyLogger(Logger):
    def _log(self, level, msg, *args, **kwargs) -> None:
        print("my logger wants to log:", msg)
        super()._log(level, msg, *args, **kwargs)
Then we tell the global logging manager to use this class instead.
from logging import setLoggerClass
setLoggerClass(MyLogger)
Thanks to @Daniil Fajnberg for pointing out that setLoggerClass exists.
Now getLogger will instantiate your custom class.
from logging import getLogger
logger = getLogger(__file__)
logger.error("Dummy Error")
This will log the error as normal and also print "my logger wants to log: ...".
Note: The _log method you are overloading is undocumented. Maybe there is a better way to achieve what you want.
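For example, one documented alternative (my suggestion, not from the answer above) is logging.LoggerAdapter, whose process hook is the supported place to rewrite messages before they reach _log:

import logging

class PrefixAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        # process() is the documented customization hook
        return "my logger wants to log: %s" % msg, kwargs

adapter = PrefixAdapter(logging.getLogger(__name__), {})
adapter.error("Dummy Error")  # the message arrives at _log already rewritten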
If I am understanding correctly, what @Bennet wants is this: he has some custom logger classes derived from Logger (Logger acts as the interface), like Logger1, Logger2, etc., and which implementation gets chosen will vary at runtime. On top of each of these he wants to add some functionality that modifies only the _log function of each implementation.
IMO there shouldn't be any direct way to do it, since what you are attempting is to modify (not extend) the behaviour of an existing class, which is not recommended in the OOP paradigm.
The hacky way is clever (I found it cool):
def __getattr__(self, method_name):
    return getattr(self._parent, method_name)
(I don't think you can do the same in Java)
P.S. I wanted to post this as a comment, but I am too poor in SO reputation, it seems :)
From the way you keep re-phrasing your more general question, it seems you misunderstand how object creation works. You are asking for a
way of constructing a child class instance with the parent instance specified.
There is no such concept as a "parent instance". Inheritance refers to classes, not objects. This means you need to define for yourself what that term is supposed to mean. How would you define a "parent instance"? What should it be, and when and how should it be created?
Just to demonstrate that there is no mechanism for creating "parent instances", when a child class instance is created, consider this:
class Foo:
    instances = []

    def __new__(cls):
        print(f"{cls.__name__}.__new__()")
        instance = super().__new__(cls)
        Foo.instances.append(instance)
        return instance

class Bar(Foo):
    pass

bar = Bar()

assert len(Foo.instances) == 1
assert Foo.instances[0] is bar
assert type(bar) is Bar
assert isinstance(bar, Foo)
The output is Bar.__new__() and obviously all the assertions pass. This goes to show that when we create an instance of Bar, construction is delegated up the MRO chain (because Bar doesn't implement its own __new__ method), which results in a call to Foo.__new__. That call creates the object (by calling object.__new__) and puts it into the instances list. Foo does not also create another instance of class Foo.
You also seem to misunderstand what calling super() does, so I suggest checking out the documentation. In short, it is just an elegant tool for accessing a related class (again: not an instance).
So, again, your question is ill-defined.
If you mean (as @Barmar suggested) that you want a way to copy all the attributes of an instance of Foo over to an instance of Bar, that is another story. In that case, you still need to be careful to define what exactly you mean by "all attributes".
Typically this would refer to the instance's __dict__. But do you also want its __slots__ copied? What about methods? Do you want them copied, too? And do you want to just replace everything on the Bar instance, or only update those attributes set on the Foo instance?
I hope you see what I am getting at. I guess the simplest way is to just update the instance's __dict__ with values from the other one:
...

class Bar(Foo):
    def update_from(self, obj):
        self.__dict__.update(obj.__dict__)

foo = Foo()
foo.n = 1
foo.text = "Hi"

bar = Bar()
bar.update_from(foo)
print(bar.n, bar.text)  # output: `1 Hi`
And you could of course do that in the __init__ method of Bar, if you wanted. If the initialization of Foo is deterministic and instances keep the initial arguments lying around somewhere, you could instead just call the inherited super().__init__ from Bar.__init__ and pass those initial arguments to it from the instance. Something like this:
class Foo:
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.z = x + y

class Bar(Foo):
    def __init__(self, foo_obj):
        super().__init__(foo_obj.x, foo_obj.y)

foo = Foo(2, 3)
bar = Bar(foo)
print(bar.z)  # output: `5`
I hope this makes things clearer for you.
I have a class
class A:
    def sample_method(self):
        ...
I would like to decorate class A's sample_method() and override its contents:
class DecoratedA(A):
    def sample_method(self):
        ...
The setup above resembles inheritance, but I need to keep the preexisting instance of class A when the decorated function is used.
a = ...  # preexisting instance of class A
decorated_a = DecoratedA(a)
decorated_a.functionInClassA()  # functions in class A called as usual on the preexisting instance
decorated_a.sample_method()     # should call the overridden sample_method() defined in DecoratedA
What is the proper way to go about this?
There isn't a straightforward way to do what you're asking. Generally, after an instance has been created, it's too late to mess with the methods its class defines.
As far as I see it, you have two options: either you create a wrapper or proxy object for your pre-existing instance, or you modify the instance to change its behavior.
A proxy defers most behavior to the object itself, while only adding (or overriding) some limited behavior of its own:
class Proxy:
    def __init__(self, obj):
        self.obj = obj

    def overridden_method(self):  # add your own limited behavior for a few things
        do_stuff()

    def __getattr__(self, name):  # and hand everything else off to the other object
        return getattr(self.obj, name)
__getattr__ isn't perfect here: it only works for regular methods, not special __dunder__ methods, which are often looked up directly on the class itself. If you want your proxy to match all possible behavior, you probably need to add things like __add__ and __getitem__, but that might not be necessary in your specific situation (it depends on what A does).
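To illustrate that limitation, a minimal sketch (wrapping a plain list, which is an assumption for demonstration):

class Proxy:
    def __init__(self, obj):
        self.obj = obj

    def __getattr__(self, name):
        return getattr(self.obj, name)

p = Proxy([1, 2, 3])
print(p.count(2))  # 1 -- regular attribute access is delegated
p[0]               # TypeError -- implicit __getitem__ lookup skips __getattr__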
As for changing the behavior of the existing object, one approach is to write your subclass, and then change the existing object's class to be the subclass. This is a little sketchy, since you won't have ever initialized the object as the new class, but it might work if you're only modifying method behavior.
class ModifiedA(A):
    def overridden_method(self):  # do the override in a normal subclass
        do_stuff()

def modify_obj(obj):  # then change an existing object's type in place!
    obj.__class__ = ModifiedA  # this is not terribly safe, but it can work
You could also consider adding an instance variable that would shadow the method you want to override, rather than modifying __class__. Writing the function could be a little tricky, since it won't get bound to the object automatically when called (that only happens for functions that are attributes of a class, not attributes of an instance), but you could probably do the binding yourself (with partial or lambda) if you need to access self; see the sketch below.
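A minimal sketch of that shadowing approach, using types.MethodType to do the binding by hand (replacement is a hypothetical override; functools.partial or a lambda would work too):

import types

class A:
    def sample_method(self):
        return "original"

def replacement(self):
    return "decorated"

a = A()
# shadow the class-level method with a manually bound instance attribute
a.sample_method = types.MethodType(replacement, a)
print(a.sample_method())    # decorated
print(A().sample_method())  # original -- other instances are unaffected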
First, why not just define it the way you want it from the beginning, instead of decorating it?
Second, why not decorate the method itself?
To answer the question:
You can reassign it
class A:
    def sample_method(self): ...

A.sample_method = DecoratedA.sample_method
but that affects every instance.
Another solution is to reassign the method for just one object.
import functools

a.sample_method = functools.partial(DecoratedA.sample_method, a)
Another solution is to (temporarily) change the type of an existing object.
a = A()
a.__class__ = DecoratedA
a.sample_method()
a.__class__ = A
I'm sorry for the rather vague formulation of the question. I'll try to clarify what I mean through a simple example below. I have a class I want to use as a base for other classes:
class parent:
    def __init__(self, constant):
        self.constant = constant

    def addconstant(self, number):
        return self.constant + number
The self.constant attribute is paramount for the usability of the class, as the addconstant method depends on it. Therefore the __init__ method takes care of forcing the user to define the value of this attribute. So far so good.
Now, I define a child class like this:
class child(parent):
    def __init__(self):
        pass
Nothing stops me from creating an instance of this class, but when I try to use addconstant, it will obviously crash:
b = child()
b.addconstant(5)
AttributeError: 'child' object has no attribute 'constant'
Why does Python allow this!? I feel that Python is "too flexible" and somehow defeats the point of OOP and encapsulation. If I want to extend a class to take advantage of inheritance, I must be careful and know certain details of the class's implementation. In this case, I have to know that forcing the user to set the constant parameter is fundamental to not breaking the usability of the class. Doesn't this somehow break the encapsulation principle?
Because Python is a dynamic language. It doesn't know what attributes are on a parent instance until parent's initializer puts them there. In other words, the attributes of parent are determined when you instantiate parent and not an instant before.
In a language like Java, the attributes of parent would be established at compile time. But Python class definitions are executed, not compiled.
In practice, this isn't really a problem. Yes, if you forget to call the parent class's initializer, you will have trouble. But calling the parent class's initializer is something you pretty much always do, in any language, because you want the parent class's behavior. So, don't forget to do that.
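In the example above, forgetting to do so is exactly the bug; the fix is for child's initializer to call the inherited one (a minimal sketch based on the question's classes):

class child(parent):
    def __init__(self, constant):
        super().__init__(constant)  # let parent set up self.constant

b = child(3)
print(b.addconstant(5))  # 8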
A sometimes-useful technique is to define a reasonable default on the class itself.
class parent:
    constant = 0

    def __init__(self, constant):
        self.constant = constant

    def addconstant(self, number):
        return self.constant + number
Python falls back to the class's attribute if you haven't defined one on the instance, so this provides a fallback value in case you forget to set it.
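With that fallback in place, even the broken child from the question degrades gracefully:

class child(parent):
    def __init__(self):
        pass  # still forgets to call parent's initializer

b = child()
print(b.addconstant(5))  # 5 -- falls back to the class attribute constant = 0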
If a subclass wants to modify the behaviour of inherited methods through static fields, is it thread safe?
More specifically:
class A(object):
    _m = 0
    def do(self):
        print self._m

class B(A):
    _m = 1
    def test(self):
        self.do()

class C(A):
    _m = 2
    def test(self):
        self.do()
Is there a risk that an instance of class B calling do() would behave as class C is supposed to, or vice versa, in a multithreading environment? I would say yes, but I was wondering if somebody has actually tested this pattern already.
Note: This is not a question about the pattern itself, which I think should be avoided, but about its consequences, as I found it in reviewing real life code.
First, remember that classes are objects, and static fields (and for that matter, methods) are attributes of said class objects.
So what happens is that self.do() looks up the do method on self and calls do(self). self is set to whatever object the method is being called on, which itself references one of the classes A, B, or C as its class. So the lookup will find the value of _m in the correct class.
Of course, that requires a correction to your code:
class A(object):
    _m = 0
    def do(self):
        if self._m == 0:
            ...
        elif self._m == 1:
            ...
Your original code won't work because Python only looks for _m in two places: defined in the function, or as a global. It won't look in class scope like C++ does. So you have to prefix it with self. so the right one gets used. If you wanted to force it to use the _m in class A, you would use A._m instead.
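A quick check with the (corrected) classes from the question. Each call just reads the attribute through self, so instances of B and C cannot interfere with one another, in threads or otherwise:

b = B()
c = C()
b.test()    # prints 1 -- self._m is found on class B
c.test()    # prints 2 -- self._m is found on class C
print A._m  # 0 -- the base class value is untouched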
P.S. There are times you need this pattern, particularly with metaclasses, which are kinda-sorta Python's analog to C++'s template metaprogramming and functional algorithms.
Say I have classes A, B and C.
Classes A and B are both mixin classes for class C.
class A(object):
    pass

class B(object):
    pass

class C(object, A, B):
    pass
This will not work when creating class C; I would have to remove object from class C to make it work (else you'll get MRO problems):
TypeError: Error when calling the metaclass bases
Cannot create a consistent method resolution
order (MRO) for bases B, object, A
However, my case is a bit more complicated. In my case class C is a server where A and B will be plugins that are loaded on startup. These are residing in their own folder.
I also have a class named Cfactory. In Cfactory I have a __new__ method that will create a fully functional C object. In the __new__ method I search for plugins, load them using __import__, and then assign them to C.__bases__ += (loadedClassTypeGoesHere, )
So the following is a possibility: (made it quite abstract)
class A(object):
    def __init__(self): pass
    def printA(self): print "A"

class B(object):
    def __init__(self): pass
    def printB(self): print "B"

class C(object):
    def __init__(self): pass

class Cfactory(object):
    def __new__(cls):
        C.__bases__ += (A,)
        C.__bases__ += (B,)
        return C()
This again will not work, and will give the MRO errors again:
TypeError: Cannot create a consistent method resolution
order (MRO) for bases object, A
An easy fix for this is removing the object base class from A and B. However, this will make them old-style classes, which should be avoided when these plugins are run stand-alone (which should be possible, unit-test-wise).
Another easy fix is removing object from C, but this will also make it an old-style class, and then C.__bases__ will be unavailable, so I can't add extra classes to the bases of C.
What would be a good architectural solution for this, and how would you do something like this? For now I can live with old-style classes for the plugins themselves, but I'd rather not use them.
Think of it this way -- you want the mixins to override some of the behaviors of object, so they need to be before object in the method resolution order.
So you need to change the order of the bases:
class C(A, B, object):
    pass
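You can verify the resulting linearization directly (a quick sanity check, assuming the empty A and B from the question):

class A(object): pass
class B(object): pass

class C(A, B, object):
    pass

print [cls.__name__ for cls in C.__mro__]
# ['C', 'A', 'B', 'object']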
Due to this bug, you need C not to inherit from object directly to be able to correctly assign to __bases__, and the factory really could just be a function:
class FakeBase(object):
    pass

class C(FakeBase):
    pass

def c_factory():
    for base in (A, B):
        if base not in C.__bases__:
            C.__bases__ = (base,) + C.__bases__
    return C()
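Usage is then a single call; a quick check of what the factory produces, using the A and B from the question:

obj = c_factory()
obj.printA()       # A
obj.printB()       # B
print C.__bases__  # (B, A, FakeBase) -- object is only reached via FakeBase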
I don't know the details, so maybe I'm completely off-base here, but it seems like you're using the wrong mechanisms to achieve your design.
First off, why is Cfactory a class, and why does its __new__ method return an instance of something else? That looks like a bizarre way to implement what is quite naturally a function. Cfactory as you've described it (and shown in the simplified example) doesn't behave at all like a class; you don't have multiple instances of it that share functionality (in fact it looks like you've made it impossible to construct instances of it naturally).
To be honest, C doesn't look very much like a class to me either. It seems you can't be creating more than one instance of it, otherwise you'd end up with an ever-growing bases list. So that makes C basically a module rather than a class, only with extra boilerplate. I try to avoid the "single-instance class to represent the application or some external system" pattern (though I know it's popular, because Java requires that you use it). But the class inheritance mechanism can often be handy for things that aren't really classes, such as your plugin system.
I would've done this with a classmethod on C to find and load plugins, invoked by the module defining C so that it's always in a good state. Alternatively, you could use a metaclass to automatically add whatever plugins it finds to the class bases. Mixing the mechanism for configuring the class in with the mechanism for creating an instance of the class seems wrong; it's the opposite of flexible, de-coupled design.
If the plugins can't be loaded at the time C is created, then I would go with manually invoking the configurator classmethod at the point when you can search for plugins, before the C instance is created.
Actually, if the class can't be put into a consistent state as soon as it's created, I would probably rather go for dynamic class creation than modifying the bases of an existing class. Then the system isn't locked into the class being configured once and instantiated once; you're at least open to the possibility of having multiple instances with different sets of plugins loaded. Something like this:
def Cfactory(*args, **kwargs):
    plugins = find_plugins()
    bases = (C,) + plugins
    cls = type('C_with_plugins', bases, {})
    return cls(*args, **kwargs)
That way, you have your single call to create your C instance, which gives you a correctly configured instance, but it doesn't have strange side effects on any other hypothetical instances of C that might already exist, and its behaviour doesn't depend on whether it's been run before. I know you probably don't need either of those two properties, but it's barely more code than you have in your simplified example, and why break the conceptual model of what classes are if you don't have to?
There is a simple workaround: create a helper class with a nice name, like PluginBase, and inherit from that instead of object.
This makes the code more readable (IMHO) and it circumvents the bug.
class PluginBase(object): pass
class ServerBase(object): pass

class pluginA(PluginBase): "Now it is clearly a plugin class"
class pluginB(PluginBase): "Another plugin"

class Server1(ServerBase, pluginA, pluginB): "This works"

class Server2(ServerBase): pass
Server2.__bases__ += (pluginA,)  # This also works
As a note: you probably don't need the factory; it's needed in C++, but hardly ever in Python.