Monkey patching class functions and properties with an existing instance in Jupyter - python

When I'm prototyping a new project on Jupyter, I sometimes find that I want to add/delete methods to an instance. For example:
class A(object):
    def __init__(self):
        # some time-consuming function
        pass
    def keep_this_fxn(self):
        return 'hi'

a = A()

## but now I want to make A -> A_new
class A_new(object):
    def __init__(self, v):
        # some time-consuming function
        self._new_prop = v
    def keep_this_fxn(self):
        return 'hi'
    @property
    def new_prop(self):
        return self._new_prop
    def new_fxn(self):
        return 'hey'
Without having to manually do A.new_fxn = A_new.new_fxn or reinitializing the instance, is it possible to have this change done automatically? Something like
def update_instance(a, A_new):
    # what's here?
    ...

a = update_instance(a, A_new(5))  ## should not be as slow as original initialization!

>>> type(a)               ## keeps the name, preferably!
<A>
>>> a.keep_this_fxn()     ## same as the original
'hi'
>>> a.new_fxn()           ## but with new functions
'hey'
>>> a.new_prop            ## and new properties
5
Related posts don't seem to cover this, especially new properties and new args:
How to update instance of class after class method addition?
Monkey patching class and instance in Python
Here's my current attempt:
def update_class_instance(instance, NewClass, new_method_list):
    OrigClass = type(instance).__mro__[0]
    for method in new_method_list:
        setattr(OrigClass, method, getattr(NewClass, method))
but (a) I still have to specify new_method_list (which I'd prefer to be handled automatically if possible), and (b) I have no idea what to do about the new properties and args.
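One direction worth sketching (the helper below, its name update_instance_from, and the choice to pass the class plus the new value instead of an already-built A_new(5) are all illustrative, not an established answer): copy every attribute that A_new defines and the original class lacks onto type(a), then set the new backing state by hand.
def update_instance_from(instance, new_cls, new_prop_value=None):
    """Sketch: graft attributes new_cls defines but the instance's class lacks.

    Works for plain functions and property objects alike, because both are
    just class attributes. New constructor state still has to be set by hand.
    """
    orig_cls = type(instance)
    for name, attr in vars(new_cls).items():
        if name.startswith('__') or hasattr(orig_cls, name):
            continue  # skip dunders and anything the original class already has
        setattr(orig_cls, name, attr)
    if new_prop_value is not None:
        instance._new_prop = new_prop_value  # the value backing new_prop above
    return instance

a = update_instance_from(a, A_new, 5)
print(type(a).__name__, a.keep_this_fxn(), a.new_fxn(), a.new_prop)
# A hi hey 5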

Related

Possible to hijack class definition with decorators?

Say I have a
class A:
    def __init__(self, *args):
        pass
and I want a decorator that copies A's definition and extends it into a new class.
def decorator(cls):  # some decorator here
    # make a new class which inherits from A
    # return it while preserving the original A
    ...
Is that possible? (PS: This is to avoid maintenance problems.)
When you invoke a function using decorator syntax:
@my_decorator_function
class A:
    pass
The decorator function's return value will replace the existing definition of A. So if you want it to create a new class and "return it while preserving the original A", you've got a tricky challenge. What you return will replace A, so you need to decide if that should be the original A or the new class. You can put the other one somewhere else.
For instance, this decorator would replace A with a subclass, and the subclass will make the original A class available as a class attribute named _orig:
def class_decorator(cls):
    class SubClass(cls):
        _orig = cls
        # add other stuff here?
    return SubClass
You can add extra logic to copy the original class's __name__ and __doc__ into the new class if you want to. You could also turn the logic around, and add SubClass as an attribute of cls before returning the otherwise unmodified cls.
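For instance, that extra copying might look like this (my sketch, extending the decorator above):
def class_decorator(cls):
    class SubClass(cls):
        _orig = cls
    # carry the original identity over so reprs and introspection stay readable
    SubClass.__name__ = cls.__name__
    SubClass.__qualname__ = getattr(cls, '__qualname__', cls.__name__)
    SubClass.__doc__ = cls.__doc__
    return SubClass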
Using @decorator is not the only possible syntax. You can put B = decorator(A) after the class definition.
class A:
    ...

B = decorator(A)
Now you still have a reference on the undecorated A, and you have a decorated version B.
The other answers have done a good job, but to make it crystal clear why you don't want to do this:
def dec(cls):
    new_cls = type(cls.__name__, (cls,), {})
    return new_cls

@dec
class A():
    pass
Now inspect the method resolution order of class A:
>>> A.__mro__
(<class '__main__.A'>, <class '__main__.A'>, <class 'object'>)
>>> classes = A.__mro__
>>> classes[0].__name__
'A'
>>> classes[1].__name__
'A'
TWO class As! Are they the same?
>>> classes[0] is classes[1]
False
Nope; different. The current variable A is pointing to the lowest one of course:
>>> A is classes[0]
True
But now you've lost name-access to the parent. That's usually not optimal.
In short: you are creating a metric ton of confusion and ambiguity for yourself a few months from now when you have forgotten all about what you did. Do something else.
If you really want to, here is an idea for spinning out new subclasses:
def add_babymaker(cls):
    '''Adds a method for making new child classes.'''
    def babymaker(name=None):
        '''Creates a new child class based on the parent class.'''
        name = name if name is not None else cls.__name__
        new_cls = type(name, (cls,), {})
        return new_cls
    cls.babymaker = babymaker
    return cls

@add_babymaker
class A():
    pass
B = A.babymaker('B')
C = A.babymaker('C')
ANew = A.babymaker()
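A quick check of what those calls produce (my addition, not part of the original answer):
print(issubclass(B, A), issubclass(C, A))     # True True
print(B.__name__, C.__name__, ANew.__name__)  # B C A
print(ANew is A)                              # False -- a new subclass that merely reuses the name 'A'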
I think I have worked it out. That's not really a good idea.
def make_variant(cls):
    suffix = 'VARIANT'
    new = type(cls.__name__ + suffix, (cls,), {})
    # new.__repr__ = lambda self: 'HELLO'  # Just do whatever needed here
    assert cls.__name__ + suffix not in globals()
    globals()[cls.__name__ + suffix] = new  # Think twice about this line
    return cls

@make_variant
class A:
    def __init__(self):
        pass
print(AVARIANT(), A())

Creating methods on the fly for a class instance

The following program fails to attach member functions to a class instance:
class MyClass(object):
    def __init__(self, name=""):
        self.name = name
    def read_name(self):
        return self.name

# First argument should be a ref to class
def callback(fcn, arg):
    fcn.name = arg

# Create a instance of class
a = MyClass("Blue")

# Lets add new member functions
setattr(a, 'callback1', callback)
setattr(a, 'callback2', callback)

print a.read_name()
print a.callback1("purple")  #! FAILS
print a.callback2("cyan")    #! FAILS
What is the right way of creating a class member function automatically?
I want to create 'N' callback functions; they will all modify some common/uncommon class data (a shared dict).
EDIT 1
I wish to collect information from 'N' separate/parallel threads by passing callback functions. I do not know beforehand how many callback functions I need thus I want to create them on fly.
EDIT 2
I have a dictionary (d) where I am storing the information of different processes. The dictionary (d) is accessed within the callback. But because the same callback function is called from different threads, the dictionary data gets garbled. As a quick fix, I thought of creating separate callbacks.
If you know what you're doing, you'd want to try
import types
setattr(a, 'callback1', types.MethodType(callback, a, MyClass))
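(Side note from me, not the original answer: the three-argument MethodType above is Python 2. On Python 3 it takes just the function and the instance.)
import types

# Python 3: bind callback to the instance a
a.callback1 = types.MethodType(callback, a)
a.callback1("purple")     # callback now receives a as its first argument
print(a.read_name())      # purple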
In short: when grafting a method, assign it to the class, not to the instance.
Here's an elucidating example.
class A(object):
    """As trivial as a class can get."""
    def foo(self):
        return self.bar(1) + self.baz()

# Rework everything!
def new_bar(self, x):
    return 'I got %r' % x

def new_baz(self):
    return ' and I\'m okay!'

A.bar = new_bar
A.baz = new_baz
print A().foo()
Now let's graft a method onto an instance.
a = A()
# An instance attribute is a bound method;
# when we replace it with a function, we lose access to self.
a.bar = lambda x: x * 100
A.baz = lambda self: 42
assert a.foo() == 142
# We can do better, though.
from types import MethodType
a2 = A()
a2.foo = MethodType(lambda self: 'I know myself, my class is %s' % self.__class__.__name__, a2)
print a2.foo()
Note how you don't need setattr to set an attribute, even an unknown attribute. You may remember that you don't use setattr in __init__ either.
You can't add a class method to an instance; you have to add it to the class:
setattr(MyClass, 'callback1', callback)
But it's still a terrible idea. Why would you want this functionality?
Edit: keep your callbacks in a container instead:
class MyClass(object):
    def __init__(self, name=""):
        self.name = name
        self.callbacks = []
    def callback(self, idx, arg):
        self.callbacks[idx](self, arg)

# First argument should be a ref to class
def callback(fcn, arg):
    fcn.name = arg

# Create a instance of class
a = MyClass("Blue")

# Lets add new member functions
a.callbacks.append(callback)
a.callbacks.append(callback)

print a.name
a.callback(0, "purple")
print a.name
a.callback(1, "cyan")
print a.name
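On the EDIT 2 point about N threads garbling a shared dict, one common pattern (my sketch, not the original answer) is to generate the callbacks with functools.partial so each one writes to its own key instead of sharing mutable state:
from functools import partial

def store_result(results, key, value):
    results[key] = value   # each callback writes only to its own slot

results = {}
callbacks = [partial(store_result, results, i) for i in range(4)]  # N = 4, chosen arbitrarily

callbacks[2]("data from worker 2")
print(results)   # {2: 'data from worker 2'}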

Nice way to call a method of a property

Imagine the following code (which is totally useless if taken alone):
# define a property with additional methods
class P(property):
    def __init__(self, name):
        property.__init__(self,
                          fget=lambda self: self._get(name),
                          fset=lambda self, v: self._set(name, v))
        self._name = name

    def some_fn(self):
        print('name: ' + self._name)

# define a class with two 'enhanced' properties
class C:
    p1 = P('p1')
    p2 = P('p2')

    def __init__(self):
        self._values = {}

    def _get(self, name):
        return self._values[name]

    def _set(self, name, v):
        self._values[name] = v
c = C()
c.p1 = 5
c.p2 = c.p1
print(c.p1, c.p2)
I just create a class C with two properties which have an extra method some_fn().
The problem is now: you can't call some_fn() easily by just writing c.p1.some_fn() because you would evaluate c.p1 first, which results in some value which doesn't provide the method any more.
I've tried to find some workarounds / approaches for calling some_fn in the context of a certain property (not its value), but I'm not happy with any of them yet.
My goal is quite simple:
I want to be able to read/assign properties without boilerplate:
c.p1 = c.p2 instead of c.p1.set(c.p2.get())
The way I call the extra method/function must be easy to read/write
I want to write code that can be statically verified by pylint, mypy etc., so some_fn('c.p1') is not an option because it can't be checked whether 'c.p1' is a valid attribute of an existing object c.
some_fn doesn't have to be a method. It can be a function or any other way to request functionality in context of a property
I don't even need real properties. Any other way to write something like c.p1 == c.p2 (e.g. using __getattr__/__setattr__) would be fine, too, as long as the get/set operations are still trackable.
I collected some code to make clear, what I'm talking about:
# ==== What I want to do ==============
c.p1.some_fn()             # <-- this is what I want to write but
                           #     it's invalid since it evaluates to
                           #     5.some_fn()
some_fn(c.p1)              # <-- something like this looks OK, too but
                           #     it evaluates to some_fn(5) (useless)

# ==== These are options that came to mind but I'm not happy with ======
getattr(C, 'p1').some_fn() # <-- this works but it is ugly
some_fn("c.p1")            # <-- this is possible, too but I can't
                           #     check integrity statically (pylint/mypy)
c.p1.value = c.p2.value    # <-- this is a valid approach but it
c.p1.some_fn()             #     increases boilerplate
some_fn(c.p1)  # (again)   # <-- this can actually work if you `inspect`
                           #     the call stack inside `C._get()` but
                           #     it's black magic and incredibly slow
with some_fn():            # <-- this can work when `some_fn` changes
    c.p1                   #     some global state which gets evaluated
                           #     inside `C._get()`
My goal is quite simple: I want to be able to read/assign properties without boilerplate: c.p1 = c.p2
If that is the goal here, it sounds like you've misunderstood properties, because they already work like that.
class C(object):
    @property
    def p1(self):
        ...  # get value
    @p1.setter
    def p1(self, val):
        ...  # set value
    @property
    def p2(self):
        ...  # get value
    @p2.setter
    def p2(self, val):
        ...  # set value
Then if you have an object c = C(), you can do c.p1 = c.p2, and it'll just work. Sticking more methods onto a property object is the wrong way to go.
If you really want to stick methods onto properties, retrieve the property through the class:
C.p1.some_fn()
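With the P-based class from the question, that looks like this (my own worked example): accessing the attribute on the class rather than on the instance returns the descriptor itself, so the extra method is reachable.
c = C()
c.p1 = 5
c.p2 = c.p1           # plain property reads/writes keep working

C.p1.some_fn()        # prints: name: p1  (the P descriptor, not its value)
type(c).p2.some_fn()  # prints: name: p2  (same idea when you only have the instance)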

Python - extending properties like you'd extend a function

Question
How can you extend a python property?
A subclass can extend a super class's function by calling it in the overloaded version, and then operating on the result. Here's an example of what I mean when I say "extending a function":
# Extending a function (a tongue-in-cheek example)
class NormalMath(object):
    def __init__(self, number):
        self.number = number

    def add_pi(self):
        n = self.number
        return n + 3.1415

class NewMath(object):
    def add_pi(self):
        # NewMath doesn't know how NormalMath added pi (and shouldn't need to).
        # It just uses the result.
        n = NormalMath.add_pi(self)
        # In NewMath, fractions are considered too hard for our users.
        # We therefore silently convert them to integers.
        return int(n)
Is there an analogous operation to extending functions, but for functions that use the property decorator?
I want to do some additional calculations immediately after getting an expensive-to-compute attribute. I need to keep the attribute's access lazy. I don't want the user to have to invoke a special routine to make the calculations. Basically, I don't want the user to ever know the calculations were made in the first place. However, the attribute must remain a property, since I've got legacy code I need to support.
Maybe this is a job for decorators? If I'm not mistaken, a decorator is a function that wraps another function, and I'm looking to wrap a property with some more calculations and then present it as a property again, which seems like a similar idea... but I can't quite figure it out.
My Specific Problem
I've got a base class LogFile with an expensive-to-construct attribute .dataframe. I've implemented it as a property (with the property decorator), so it won't actually parse the log file until I ask for the dataframe. So far, it works great. I can construct a bunch (100+) of LogFile objects and use cheaper methods to filter and select only the important ones to parse. And whenever I'm using the same LogFile over and over, I only have to parse it the first time I access the dataframe.
Now I need to write a LogFile subclass, SensorLog, that adds some extra columns to the base class's dataframe attribute, but I can't quite figure out the syntax to call the super class's dataframe construction routines (without knowing anything about their internal workings), then operate on the resulting dataframe, and then cache/return it.
# Base Class - rules for parsing/interacting with data.
class LogFile(object):
    def __init__(self, file_name):
        # file name to find the log file
        self.file_name = file_name
        # non-public variable to cache results of parse()
        self._dataframe = None

    def parse(self):
        with open(self.file_name) as infile:
            ...
            # Complex rules to interpret the file
            ...
            self._dataframe = pandas.DataFrame(stuff)

    @property
    def dataframe(self):
        """
        Returns the dataframe; parses file if necessary. This works great!
        """
        if self._dataframe is None:
            self.parse()
        return self._dataframe

    @dataframe.setter
    def dataframe(self, value):
        self._dataframe = value
# Sub class - adds more information to data, but doesn't parse;
# must preserve established .dataframe interface
class SensorLog(LogFile):
    def __init__(self, file_name):
        # Call the super's constructor
        LogFile.__init__(self, file_name)
        # SensorLog doesn't actually know about (and doesn't rely on) the
        # ._dataframe cache, so it overrides it just in case.
        self._dataframe = None

    # THIS IS THE PART I CAN'T FIGURE OUT
    # Here's my best guess, but it doesn't quite work:
    @property
    def dataframe(self):
        # use parent class's getter, invoking the hidden parse function
        # and any other operations LogFile might do.
        self._dataframe = LogFile.dataframe.getter()
        # Add additional calculated columns
        self._dataframe['extra_stuff'] = 'hello world!'
        return self._dataframe

    @dataframe.setter
    def dataframe(self, value):
        self._dataframe = value
Now, when these classes are used in an interactive session, the user should be able to interact with either in the same way.
>>> log = LogFile('data.csv')
>>> print log.dataframe
#### DataFrame with 10 columns goes here ####
>>> sensor = SensorLog('data.csv')
>>> print sensor.dataframe
#### DataFrame with 11 columns goes here ####
I have lots of existing code that takes a LogFile instance which provides a .dataframe attribute and does something interesting (mostly plotting). I would LOVE to have SensorLog instances present the same interface so they can use the same code. Is it possible to extend the super-class's dataframe getter to take advantage of existing routines? How? Or am I better off doing this a different way?
Thanks for reading that huge wall of text. You are an internet super hero, dear reader. Got any ideas?
You should be calling the superclass properties, not bypassing them via self._dataframe. Here's a generic example:
class A(object):
    def __init__(self):
        self.__prop = None

    @property
    def prop(self):
        return self.__prop

    @prop.setter
    def prop(self, value):
        self.__prop = value

class B(A):
    def __init__(self):
        super(B, self).__init__()

    @property
    def prop(self):
        value = A.prop.fget(self)
        value['extra'] = 'stuff'
        return value

    @prop.setter
    def prop(self, value):
        A.prop.fset(self, value)
And using it:
b = B()
b.prop = dict((('a', 1), ('b', 2)))
print(b.prop)
Outputs:
{'a': 1, 'b': 2, 'extra': 'stuff'}
I would generally recommend placing side-effects in setters instead of getters, like this:
class A(object):
    def __init__(self):
        self.__prop = None

    @property
    def prop(self):
        return self.__prop

    @prop.setter
    def prop(self, value):
        self.__prop = value

class B(A):
    def __init__(self):
        super(B, self).__init__()

    @property
    def prop(self):
        return A.prop.fget(self)

    @prop.setter
    def prop(self, value):
        value['extra'] = 'stuff'
        A.prop.fset(self, value)
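Used the same way (a quick check of this variant, mine), the extra key is added once at assignment time instead of on every read:
b = B()
b.prop = {'a': 1, 'b': 2}   # setter mutates the value once, when it is assigned
print(b.prop)               # {'a': 1, 'b': 2, 'extra': 'stuff'}
print(b.prop)               # identical on repeated reads -- the getter is side-effect free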
Having costly operations within a getter is also generally to be avoided (such as your parse method).
If I understand correctly what you want to do is call the parent's method from the child instance. The usual way to do that is by using the super built-in.
I've taken your tongue-in-cheek example and modified it to use super in order to show you:
class NormalMath(object):
    def __init__(self, number):
        self.number = number

    def add_pi(self):
        n = self.number
        return n + 3.1415

class NewMath(NormalMath):
    def add_pi(self):
        # this will call NormalMath's add_pi
        normal_maths_pi_plus_num = super(NewMath, self).add_pi()
        return int(normal_maths_pi_plus_num)
In your Log example, instead of calling:
self._dataframe = LogFile.dataframe.getter()
you should call:
self._dataframe = super(SensorLog, self).dataframe
You can read more about super here
Edit: Even though the example I gave deals with methods, doing the same with @property shouldn't be a problem.
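For completeness, the SensorLog property might then look like this (my sketch; super() works here because LogFile.dataframe is an ordinary property and the super proxy goes through the descriptor protocol):
class SensorLog(LogFile):
    @property
    def dataframe(self):
        if self._dataframe is None:
            # LogFile's getter parses the file and caches the result
            df = super(SensorLog, self).dataframe
            df['extra_stuff'] = 'hello world!'   # the extra calculated column
            self._dataframe = df
        return self._dataframe

    @dataframe.setter
    def dataframe(self, value):
        self._dataframe = value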
You have some possibilities to consider:
1/ Inherit from LogFile and override parse in your derived sensor class. It should be possible to modify your methods that work on dataframe to work regardless of the number of members that dataframe has - as you are using pandas, a lot of it is done for you.
2/ Make sensor an instance of logfile then provide its own parse method.
3/ Generalise parse, and possibly some of your other methods, to use a list of data descriptors and possibly a dictionary of methods/rules, either set in your class initialiser or set by a method.
4/ Look at either making more use of the methods already in pandas, or possibly, extending pandas to provide the missing methods if you and others think that they would be accepted into pandas as useful extensions.
Personally I think that you would find the benefits of options 3 or 4 to be the most powerful.
The problem is that you're missing a self going into the parent class. If your parent is a singleton then a @staticmethod should work.
class X():
    x = 1

    @staticmethod
    def getx():
        return X.x

class Y(X):
    y = 2

    def getyx(self):
        return X.getx() + self.y

wx = Y()
wx.getyx()  # 3

Static variable inheritance in Python

I'm writing Python scripts for Blender for a project, but I'm pretty new to the language. Something I am confused about is the usage of static variables. Here is the piece of code I am currently working on:
class panelToggle(bpy.types.Operator):
    active = False

    def invoke(self, context, event):
        self.active = not self.active
        return {'FINISHED'}

class OBJECT_OT_openConstraintPanel(panelToggle):
    bl_label = "openConstraintPanel"
    bl_idname = "openConstraintPanel"
The idea is that the second class should inherit the active variable and the invoke method from the first, so that calling OBJECT_OT_openConstraintPanel.invoke() changes OBJECT_OT_openConstraintPanel.active. Using self as I did above won't work however, and neither does using panelToggle instead. Any idea of how I go about this?
Use type(self) to access class attributes:
>>> class A(object):
...     var = 2
...     def write(self):
...         print type(self).var
...
>>> class B(A):
...     pass
...
>>> B().write()
2
>>> B.var = 3
>>> B().write()
3
>>> A().write()
2
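Applied to the Blender classes from the question (my adaptation, not tested against the bpy API), the toggle then flips the attribute on whichever subclass the operator was invoked from:
import bpy

class panelToggle(bpy.types.Operator):
    active = False

    def invoke(self, context, event):
        # type(self) is the concrete subclass, so the flag is stored on it
        type(self).active = not type(self).active
        return {'FINISHED'}

class OBJECT_OT_openConstraintPanel(panelToggle):
    bl_label = "openConstraintPanel"
    bl_idname = "openConstraintPanel"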
You can access active through the class it belongs to:
if panelToggle.active:
    # do something
    ...
If you want to access the class variable from a method, you could write:
def am_i_active(self):
    """ This method will access the right *class* variable by
    looking at its own class type first.
    """
    if self.__class__.active:
        print 'Yes, sir!'
    else:
        print 'Nope.'
A working example can be found here: http://gist.github.com/522619
The self variable (named self by convention) is the current instance of the class, implicitly passed but explicitly received.
class A(object):
    answer = 42

    def add(self, a, b):
        """ ``self`` is received explicitly. """
        return A.answer + a + b

a = A()
print a.add(1, 2)  # The instance -- ``a`` -- is passed implicitly.
# => 45
print a.answer
# => 42
