I'm trying to add new methods dynamically, in Python, in the constructor.
Context
I wrote a Python module in C++ that interacts with my signal processing libraries written in C ...
I have a generic method to access the parameters of my modules (called filters).
Eg.:
int my_filter_set( void * filter_handle, const char * method_id, void * p_arg );
So my CPython wrapper exposes it like this:
filter.set(method_id,value)
And I have access to all my method ids.
>>> filter.setters
['SampleRate','...']
Goal
I would like to generate setters like :
>>> filter.setSampleRate(value)
in a derived class.
class Filter(BaseFilter):
    '''
    classdocs
    '''

    def __init__(self, name, library):
        '''
        Constructor
        '''
        BaseFilter.__init__(self, name, library)
        for setter_id in self.setters:
            code = 'self.set(' + setter_id + ', value)'
            # TODO: add the method body to a new method called 'set' + setter_id (taking 'value' as argument)
My questions are:
Is it possible?
Can it be done with the Python types module?
How do I do it?
I already checked this, but I don't see how I can adapt it to my purpose.
Thanks.
Rather than create the methods dynamically, you can check for method names that match your setters and then dispatch dynamically to your set() method. This can be achieved by overriding __getattribute__() in your Filter class:
class BaseFilter(object):
    setters = ('SampleRate', 'Gain', 'BitDepth')

    def set(self, method_id, value):
        print('BaseFilter.set(): setting {} to {}'.format(method_id, value))

class Filter(BaseFilter):
    def __getattribute__(self, name):
        method_id = name[3:]  # note: lstrip('set') would strip characters, not the prefix
        if name.startswith('set') and method_id in super(Filter, self).setters:
            def _set(value):
                return self.set(method_id, value)
            return _set
        else:
            return super(Filter, self).__getattribute__(name)
Here, for methods that are to be dispatched to BaseFilter.set(), __getattribute__() returns a wrapped version of BaseFilter.set() that has access to the method_id in a closure.
>>> f = Filter()
>>> f.setSampleRate(44000)
BaseFilter.set(): setting SampleRate to 44000
>>> f.setBitDepth(16)
BaseFilter.set(): setting BitDepth to 16
>>> f.setGain(100)
BaseFilter.set(): setting Gain to 100
>>> f.setHouseOnFire(999)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 9, in __getattribute__
AttributeError: 'Filter' object has no attribute 'setHouseOnFire'
I'm not sure whether your setter method names are available before the class definition is first encountered; if so, using a metaclass to create the class might be appropriate. If they're only available at runtime, then this answer should point you toward your solution: Adding a Method to an Existing Object Instance
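To make the runtime case concrete, here is a minimal sketch of that approach using types.MethodType. It assumes a BaseFilter with a setters sequence and a generic set() like in the question; the params dict is a made-up stand-in for the call into the C library:

```python
import types

class BaseFilter(object):
    setters = ('SampleRate', 'Gain')

    def __init__(self):
        self.params = {}

    def set(self, method_id, value):
        # stand-in for the real my_filter_set() call into the C library
        self.params[method_id] = value

class Filter(BaseFilter):
    def __init__(self):
        BaseFilter.__init__(self)

        # factory so each generated setter closes over its own method_id
        def make_setter(method_id):
            def _set(self, value):
                return self.set(method_id, value)
            return _set

        # generate one bound set<Name> method per setter id, at runtime
        for setter_id in self.setters:
            setattr(self, 'set' + setter_id,
                    types.MethodType(make_setter(setter_id), self))

f = Filter()
f.setSampleRate(44000)  # dispatches to f.set('SampleRate', 44000)
f.setGain(3)
```

Since the setters are attached per instance in __init__, they track whatever ids the underlying filter reports, which seems to match your use case.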
From your description I cannot tell whether you're looking for a way to create Python functions at run time, or for a way to provide Python bindings for your existing code base. There are many ways to make your code available to Python programmers.
Related
I am trying to create a class which gets given a function, which will then be run from that instance. However, when I tried to use staticmethod, I discovered that there is a difference between using the decorator and just passing staticmethod a function.
class WithDec():
    def __init__(self):
        pass

    @staticmethod
    def stat(val):
        return val + 1

def OuterStat(val):
    return val + 1

class WithoutDec():
    def __init__(self, stat):
        self.stat = staticmethod(stat)
With these two classes, the following occurs.
>>> WithDec().stat(2)
3
>>> WithoutDec(OuterStat).stat(2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'staticmethod' object is not callable
What is going on, and what can I do to stop it.
Static methods still work through the descriptor protocol, meaning that when it is a class attribute, accessing it via an instance still means that the __get__ method will be called to return an object that actually gets called. That is,
WithDec().stat(2)
is equivalent to
w = WithDec()
w.stat(2)
which is equivalent to
WithDec.stat.__get__(w, WithDec)(2)
However, the descriptor protocol is not invoked when the static method is an instance attribute, as is the case with WithoutDec. In that case
WithoutDec().stat(2)
tries to call the literal staticmethod instance stat, not the function returned by stat.__get__.
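You can watch the descriptor protocol do its work by calling __get__ by hand; for a staticmethod wrapper it simply hands back the original function unchanged:

```python
def f(val):
    return val + 1

sm = staticmethod(f)

# attribute access on a class triggers sm.__get__(instance, owner);
# for staticmethod this returns the wrapped function itself
unwrapped = sm.__get__(None, object)
assert unwrapped is f
assert unwrapped(2) == 3
```

When the staticmethod object is stored as an instance attribute instead, no __get__ call ever happens, which is why the WithoutDec example fails.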
What you wanted was to use staticmethod to create a class attribute, just not via decorator syntax:
class WithoutDec():
    def stat(val):
        return val + 1
    stat = staticmethod(stat)
You first bind stat to a regular function (it's not really an instance method until you try to use it as an instance method), then replace the function with a staticmethod instance wrapping the original function.
The problem is that you are trying to use staticmethod() inside __init__, which is used to create an instance of the class, instead of at the class level directly, which defines the class, its methods and its static methods.
This code works:
def OuterStat(val):
    return val + 1

class WithoutDec():
    stat = staticmethod(OuterStat)
>>> WithoutDec.stat(2)
3
Note that trying to create an instance of WithoutDec with its own, different, version of stat, is contrary to the meaning of a method being static.
I found a very inspiring solution in this thread. Indeed, your code is not very Pythonic, and assigns a static method to an attribute of an instance of your class. The following code works:
class WithoutDec():
    stat = None

    @staticmethod
    def OuterStat(val):
        return val + 1
then you call:
my_without_dec = WithoutDec()
my_without_dec.stat = WithoutDec.OuterStat
my_without_dec.stat(2)
later if you want to create a new method, you call:
def new_func(val):
    return val + 1
WithoutDec.newStat = staticmethod(new_func)
my_without_dec.stat = WithoutDec.newStat
my_without_dec.stat(2)
Yes -
In this case, you just have to add the function as an attribute of the instance, it will work as expected, no need for any decorators:
def OuterStat(val):
    return val + 1

class WithoutDec():
    def __init__(self, stat):
        self.stat = stat
The thing is: there is a difference if a function is an attribute of the class or an attribute of the instance. When it is set inside an instance method with self.func = X, it becomes an instance attribute - Python retrieves it the way it was stored, with no modifications, and it is simply another reference to the original function that can be called.
When a function is stored as a class attribute, instead, the default behavior is that it is used as an instance method: upon retrieving the function from an instance, Python arranges things so that self will be injected as the first argument to that function. In this case, the decorators @classmethod and @staticmethod exist to modify this behavior (inject the class for classmethod, or make no injection for staticmethod).
The thing is that staticmethod does not return a function - it returns a descriptor to be used as a class attribute, so that when the decorated function is retrieved from a class, it works as a plain function.
(Internal detail: all 3 behaviors - instance method, classmethod and staticmethod - are implemented by having an appropriate __get__ method on the object that is used as an attribute of the class.)
NB: There were some discussions about making staticmethod itself callable, so that it simply calls the wrapped function - I just checked, and this made it into Python 3.10 beta 1. This means that your example code will work as-is on Python 3.10 - nonetheless, the staticmethod call there is redundant, as stated at the beginning of this answer, and should not be used.
It used to be possible to set internal functions like __len__() at runtime. Here is an example:
#! /usr/bin/python3
import sys

class FakeSequence:
    def __init__(self):
        self.real_sequence = list()
        self.append = self.real_sequence.append
        self.__len__ = self.real_sequence.__len__

    def workaround__len__(self):
        return len(self.real_sequence)

if __name__ == '__main__':
    fake_sequence = FakeSequence()
    fake_sequence.append(1)
    fake_sequence.append(2)
    fake_sequence.append(3)
    length = len(fake_sequence)
    sys.stdout.write("len(fake_sequence) is %d\n" % (length))
Here are the results when you try to run it:
$ python2 len_test
len(fake_sequence) is 3
$ python3 len_test
Traceback (most recent call last):
File "len_test", line 18, in <module>
length = len(fake_sequence)
TypeError: object of type 'FakeSequence' has no len()
If I define the __len__() method as part of the class (i.e. rename the 'workaround' method above to __len__), it works as you would expect. If I define __len__() on the class and also reassign it on the instance as above, the instance assignment is ignored: len() always calls the class's method.
Can you point me to documentation that would help explain why assigning special methods on instances no longer works? Note that assigning non-double-underscore methods this way still works fine. I can work around this easily enough; I'm more concerned that I missed something fundamental in the transition from Python 2 to Python 3. The behavior above is consistent with the Python 3 interpreters I have easy access to (3.4, 3.6, 3.7).
Magic methods are only looked up on classes, not on instances, as documented here. The same is true in Python 2 for new-style classes (cf. https://docs.python.org/2.7/reference/datamodel.html#special-method-lookup-for-new-style-classes).
I assume the main motivation is to cut down on lookups for better performance, but there might be other reasons; I can't tell.
EDIT: actually, the motivations are clearly explained in the 2.7 doc:
The rationale behind this behaviour lies with a number of special methods such as __hash__() and __repr__() that are implemented by all objects, including type objects. If the implicit lookup of these methods used the conventional lookup process, they would fail when invoked on the type object itself:
Then:
Incorrectly attempting to invoke an unbound method of a class in this way is sometimes referred to as ‘metaclass confusion’, and is avoided by bypassing the instance when looking up special methods:
And finally:
In addition to bypassing any instance attributes in the interest of correctness, implicit special method lookup generally also bypasses the __getattribute__() method even of the object's metaclass
Bypassing the __getattribute__() machinery in this fashion provides significant scope for speed optimisations within the interpreter, at the cost of some flexibility in the handling of special methods
So that's indeed mostly a performance optimization - which is not much of a surprise when you know about Python's attribute lookup mechanism and how Python's "methods" are implemented.
This behavior is described in the docs here. This has to do with new- and old- style classes in Python 2 and 3. In other words, this shouldn't work in Python 2 if you had inherited from object. The code you posted uses old-style classes in Python 2 and new-style classes in Python 3.
The docs state that, in the interest of speed optimizations by bypassing look-ups, "the special method must be set on the class object itself in order to be consistently invoked by the interpreter."
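A minimal illustration of that rule, based on the FakeSequence example above: assigning __len__ on the class works, even for instances that already exist, because the implicit lookup goes straight to the type:

```python
class FakeSequence:
    def __init__(self):
        self.real_sequence = []
        self.append = self.real_sequence.append

fs = FakeSequence()
fs.append(1)
fs.append(2)

# set the special method on the CLASS, not the instance:
# len() will now find it for all instances, old and new
FakeSequence.__len__ = lambda self: len(self.real_sequence)
assert len(fs) == 2
```

Assigning the same lambda to fs.__len__ instead would leave len(fs) raising TypeError, since the instance attribute is bypassed.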
Tested in Python 3:
You can create your own function, say mylen, and pass it to the class's constructor. The below example uses a function mylen that always returns 5:
import sys

class FakeSequence:
    def __init__(self, length_function):
        self.real_sequence = list()
        self.append = self.real_sequence.append
        self.length_function = length_function

    def __len__(self):
        return self.length_function()

if __name__ == '__main__':
    def mylen():
        return 5
    fake_sequence = FakeSequence(mylen)
    fake_sequence.append(1)
    fake_sequence.append(2)
    fake_sequence.append(3)
    length = len(fake_sequence)
    sys.stdout.write("len(fake_sequence) is %d\n" % (length))
The __len__ function is a class attribute (search for "magic methods").
In Python 3 you should derive your custom class from other (base) classes, e.g. object (search for "new-style classes").
So if len() should work on your custom class, the easiest way is to inherit from list (which provides append(), too) and override the __len__ method.
import sys

class FakeSequence(list):
    def __init__(self, *args, **kwargs):
        # not really necessary; leave out the __init__ method if
        # you don't have your own attributes in your class.
        # But if you define an __init__ method, you
        # MUST call baseclass.__init__ inside, preferably
        # at the top of the __init__ method
        list.__init__(self, *args, **kwargs)

    def __len__(self, *args, **kwargs):
        len_of_fakesequence = list.__len__(self, *args, **kwargs)
        # here you can do anything about len()
        return len_of_fakesequence

if __name__ == '__main__':
    fake_sequence = FakeSequence()
    fake_sequence.append(1)
    fake_sequence.append(2)
    fake_sequence.append(3)
    length = len(fake_sequence)
    sys.stdout.write("len(fake_sequence) is %d\n" % (length))
For sure it's not necessary to inherit from list; new-style classes may inherit from any other class, at least object.
In this case you have nothing to override, so every method has to be defined explicitly.
class AnyList(object):
    def __init__(self, *args, **kwargs):
        self.mylength = 0

    def __len__(self, *args, **kwargs):  # len()
        return self.mylength

    def append(self):
        self.mylength += 1

if __name__ == '__main__':
    fake_sequence = AnyList()
    fake_sequence.append()
    fake_sequence.append()
    print("len(AnyList) is %d" % len(fake_sequence))
    # reads out 2
For most precise information, read the Chapter "3. Data model", especially "3.3.7. Emulating container types" of the python documentation.
For me this still works in python3:
In [1]: class Foo: pass
In [2]: len(Foo())
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-955ff12672b5> in <module>()
----> 1 len(Foo())
TypeError: object of type 'Foo' has no len()
In [3]: Foo.__len__ = lambda _: 123
In [4]: len(Foo())
Out[4]: 123
I have a Python file called sample.py with a class definition of a Sample object in it. This object has various variables and the following function:
def ratioDivision(numerator, denominator):
Then, in my main function (in another file), I declare a Sample object x, and attempt to call this function:
x.co2overco = x.ratioDivision(float(x.co2), float(x.co))
However, I get this error:
Traceback (most recent call last):
File "csvReader.py", line 192, in <module>
main(sys.argv[1:])
File "csvReader.py", line 79, in main
x.co2overco = x.ratioDivision(float(x.co2), float(x.co))
TypeError: ratioDivision() takes exactly 2 arguments (3 given)
I can't see how I gave three arguments. Is there an issue with the referencing?
Your method is an instance method. Its first parameter should be self:
def ratioDivision(self, numerator, denominator):
It sees 3 parameters, because the first parameter is the instance itself.
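Putting it together, a minimal version of the class with the corrected signature behaves as expected (the co2 and co values here are made up for illustration):

```python
class Sample(object):
    def __init__(self, co2, co):
        self.co2 = co2
        self.co = co

    def ratioDivision(self, numerator, denominator):
        return numerator / denominator

x = Sample(8, 2)
# Python injects x as 'self' automatically, so only the two
# remaining arguments are passed explicitly
x.co2overco = x.ratioDivision(float(x.co2), float(x.co))
```

With self in the signature, the call site from the question works unchanged.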
When an attribute lookup (ie obj.name) references a function that's an attribute of the class, then the attribute resolution mechanism yields a callable method object instead of the function. This method object is a wrapper around the function and instance, and when called it injects the instance as first argument, so in your case
x.ratioDivision(1, 2)
becomes
Sample.__dict__["ratioDivision"](x, 1, 2)
If ratioDivision doesn't need any access to the current instance nor class, you could just make it a plain function in your module (Python is not Java and doesn't require that everything lives in a class).
If you still want it to be accessible through Sample instances (to support class-based polymorphic dispatch, or just for practical reasons - like not having to import both Sample and ratioDivision from your module), you can also make it a staticmethod:
class Sample(object):
    @staticmethod
    def ratioDivision(numerator, denominator):
        return whatever
This being said, given your example use case, ie:
x.co2overco = x.ratioDivision(float(x.co2), float(x.co))
you may want to add a method to your Sample class, something like computeCo2overco() :
class Sample(object):
    @staticmethod
    def ratioDivision(numerator, denominator):
        return whatever

    def computeCo2overco(self):
        self.co2overco = self.ratioDivision(float(self.co2), float(self.co))
or if ratioDivision is not expensive, just use a computed attribute:
class Sample(object):
    @staticmethod
    def ratioDivision(numerator, denominator):
        return whatever

    @property
    def co2overco(self):
        return self.ratioDivision(float(self.co2), float(self.co))
In which case you can just use:
whatever = x.co2overco + something
and under the hood, it will call the co2overco() getter.
I have a class A which can be 'initialized' in two different ways. So, I provide a 'factory-like' interface for it based on the second answer in this post.
class A(object):
    @staticmethod
    def from_method_1(<method_1_parameters>):
        a = A()
        # set parameters of 'a' using <method_1_parameters>
        return a

    @staticmethod
    def from_method_2(<method_2_parameters>):
        a = A()
        # set parameters of 'a' using <method_2_parameters>
        return a
The two methods are different enough that I can't just plug their parameters into the class's __init__. So, class A should be initialized using:
a = A.from_method_1(<method_1_parameters>)
or
a = A.from_method_2(<method_2_parameters>)
However, it is still possible to call the 'default init' for A:
a = A() # just an empty 'A' object
Is there any way to prevent this? I can't just raise NotImplementedError from __init__ because the two 'factory methods' use it too.
Or do I need to use a completely different approach altogether.
It has been a very long time since this question was asked, but I think it's interesting enough to be revived.
When I first saw your problem, the private-constructor concept popped into my mind. It's an important concept in other OOP languages, but since Python doesn't enforce privacy, I hadn't really thought about it since Python became my main language.
Therefore, I became curious and found this "Private Constructor in Python" question. It covers pretty much everything about this topic, and I think the second answer can be helpful here.
Basically, it uses name mangling to declare a pseudo-private class attribute (there is no such thing as a private variable in Python) and assigns a sentinel object to it. You then have an as-private-as-Python-allows variable to check whether the initialization was made from a class method or from an outside call. I made the following example based on this mechanism:
class A(object):
    __obj = object()

    def __init__(self, obj=None):
        assert obj == A.__obj, \
            'A object must be created using A.from_method_1 or A.from_method_2'

    @classmethod
    def from_method_1(cls):
        a = A(cls.__obj)
        print('Created from method 1!')
        return a

    @classmethod
    def from_method_2(cls):
        a = A(cls.__obj)
        print('Created from method 2!')
        return a
Tests:
>>> A()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "t.py", line 6, in __init__
'A object must be created using A.from_method_1 or A.from_method_2'
AssertionError: A object must be created using A.from_method_1 or A.from_method_2
>>> A.from_method_1()
Created from method 1!
<t.A object at 0x7f3f7f2ca450>
>>> A.from_method_2()
Created from method 2!
<t.A object at 0x7f3f7f2ca350>
However, as this solution is a workaround with name mangling, it does have one flaw if you know how to look for it:
>>> A(A._A__obj)
<t.A object at 0x7f3f7f2ca450>
SHORT VERSION: External methods bound to an instance can't access private variables directly via self.__privatevarname. Is this a feature or a bug?
EXTENDED VERSION (WITH EXPLANATION AND EXAMPLE):
In Python: Bind an Unbound Method?, Alex Martelli explains a simple method for binding a function to an instance.
Using this method, one can use external functions to set instance methods in a class (in __init__).
However, this breaks down when the function being bound to the instance needs to access private variables. This is because name mangling occurs during the compile step in _Py_Mangle, so the function never gets a chance to call __getattribute__('_classname__privatevarname').
For example, if we define a simple external addition function that accesses the private instance variable __obj_val:
def add_extern(self, value):
    return self.__obj_val + value
and bind it to each instance in __init__ while also defining a similar instance method add_intern in the class definition
class TestClass(object):
    def __init__(self, object_value):
        self.__obj_val = object_value
        self.add_extern = add_extern.__get__(self, TestClass)

    def add_intern(self, value):
        return self.__obj_val + value
then the internal method will work but the external bound method will raise an exception:
>>> t = TestClass(0)
>>> t.add_intern(1)
1
>>> t.add_extern(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in add_extern
AttributeError: 'TestClass' object has no attribute '__obj_val'
POSTSCRIPT: You can overcome this shortcoming by overriding __getattribute__ to do the mangling for you:
class TestClass(object):
    ...
    def __getattribute__(self, name):
        try:
            return super(TestClass, self).__getattribute__(name)
        except AttributeError:
            # mimic behavior of _Py_Mangle
            if not name.startswith('__'):  # only private attrs
                raise
            if name.endswith('__'):  # don't mangle dunder attrs
                raise
            if '.' in name:  # don't mangle for modules
                raise
            if not name.lstrip('_'):  # don't mangle if just underscores
                raise
            mangled_name = '_{cls_name}{attr_name}'.format(
                cls_name=self.__class__.__name__, attr_name=name)
            return super(TestClass, self).__getattribute__(mangled_name)
However, this doesn't leave the variables private to external callers, which we don't want.
Due to Python's name-mangling, the actual name of the attribute as stored on t is _TestClass__obj_val (as you can see in dir(t)). The external function, not having been defined on a class, is just looking for __obj_val.
The mangled name is stored in the function's code object (e.g. t.add_extern.func_code.co_names) and is read-only, so there's no easy way to update it.
So there's a reason not to use that technique... at least with mangled names.
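If you do need an external function to reach the private attribute, one simple workaround (a sketch, reusing the TestClass example above) is to spell out the mangled name yourself. Since the function is compiled outside any class body, no mangling is applied to what you write, so writing the already-mangled name works:

```python
class TestClass(object):
    def __init__(self, object_value):
        self.__obj_val = object_value  # actually stored as _TestClass__obj_val

def add_extern(self, value):
    # compiled outside any class body, so no automatic mangling happens;
    # write the mangled name explicitly instead
    return self._TestClass__obj_val + value

t = TestClass(10)
t.add_extern = add_extern.__get__(t, TestClass)
```

The obvious downside is that the helper is now hard-wired to one class name, which somewhat defeats the genericity that made binding external functions attractive in the first place.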