I'm looking for a way to take a collection of homogeneous objects and wrap them in another object that exposes the same API as the originals, forwarding each API call to every wrapped member.
class OriginalApi:
    def __init__(self):
        self.a = 1
        self.b = "bee"

    def do_something(self, new_a, new_b, put_them_together=None):
        self.a = new_a or self.a
        self.b = new_b or self.b
        if put_them_together is not None:
            self.b = "{}{}".format(self.a, self.b)
        # etc.

class WrappedApi:
    def __init__(self):
        self.example_1 = OriginalApi()
        self.example_2 = OriginalApi()
Some possible solutions that have been considered, but are inadequate:
Rewriting the whole API. Why not? Not adequate because the API is fairly large and expanding; having to maintain the API in multiple spots is not realistic.
Code example:
class WrappedApi:
    def __init__(self):
        self.example_1 = OriginalApi()
        self.example_2 = OriginalApi()

    def do_something(self, new_a, new_b, put_them_together=None):
        self.example_1.do_something(new_a, new_b, put_them_together)
        self.example_2.do_something(new_a, new_b, put_them_together)
Using a list and a for-loop. This changes the API on the object. That said, this is the backup solution in the event I can't find something more elegant. In this case, the WrappedApi class would not exist.
Code example:
wrapped_apis = [OriginalApi(), OriginalApi()]
for wrapped_api in wrapped_apis:
    wrapped_api.do_something(1, 2, True)
I tried using Python Object Wrapper, but I could not see how to have it call multiple sub-objects with the same arguments.
And for anyone curious about the use case, it's actually a collection of several matplotlib axes objects. I don't want to reimplement the entire axes API (it's big), and I don't want to change all the code that makes calls on axes (like plot, step, etc.).
If you're only implementing methods, then a generic __getattr__ can do the trick:
class Wrapper:
    def __init__(self, x):
        self.x = x

    def __getattr__(self, name):
        def f(*args, **kwargs):
            for y in self.x:
                getattr(y, name)(*args, **kwargs)
        return f
For example, with x = Wrapper([[], [], []]), after calling x.append(12) all three list objects will have 12 as their last element.
Note that the return value will always be None. One option would be to collect the return values and return them as a list, but this would of course "break the API".
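If you do want the results back, a small variation on the wrapper above can collect them into a list. A minimal sketch (the list-of-results return is my addition and, as noted, changes the API):

class CollectingWrapper:
    def __init__(self, x):
        self.x = x

    def __getattr__(self, name):
        def f(*args, **kwargs):
            # gather each wrapped object's return value instead of discarding it
            return [getattr(y, name)(*args, **kwargs) for y in self.x]
        return f

With x = CollectingWrapper([[1], [2]]), x.pop() returns [1, 2].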
I think you have the right idea here:
wrapped_apis = [OriginalApi(), OriginalApi()]
for wrapped_api in wrapped_apis:
    wrapped_api.do_something(1, 2, True)
You can define your wrapper class by inheriting from list and then forwarding the API calls to its items once it is created.
class WrapperClass(list):
    def __init__(self, api_type):
        self.api_type = api_type
        for func in dir(api_type):
            if callable(getattr(api_type, func)) and not func.startswith("__"):
                # bind func as a default argument so each generated method keeps
                # its own name (a plain closure would always see the last func)
                setattr(self, func,
                        lambda *args, func=func, **kwargs:
                            [getattr(o, func)(*args, **kwargs) for o in self])
w = WrapperClass(OriginalApi)
o1, o2 = OriginalApi(), OriginalApi()  # two distinct instances
w.append(o1)
w.append(o2)
print(w.do_something(1, 2, True))
# [None, None]
print(w[0].b)
# 12
print(w[1].b)
# 12
print(o1.b)
# 12
Here, I'm iterating over every method in your API class and creating a method on the wrapper that applies its arguments to all of the list's items. Each call returns the individual results as a list.
Needless to say, you should probably validate the type of any new object being appended to this WrapperClass, like so:
def append(self, item):
    if not isinstance(item, self.api_type):
        raise TypeError('Wrong API type. Expected {}'.format(self.api_type))
    super(WrapperClass, self).append(item)
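With that override in place, appending the wrong type fails immediately; a quick sketch of the expected behaviour:

w = WrapperClass(OriginalApi)
w.append(OriginalApi())  # fine
w.append("not an api")   # raises TypeError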
Suppose there is a class A and a factory function make_A
class A():
    ...

def make_A(*args, **kwargs):
    # returns an object of type A
    ...
both defined in some_package.
Suppose also that I want to expand the functionality of A, by subclassing it,
without overriding the constructor:
from some_package import A, make_A

class B(A):
    def extra_method(self, ...):
        # adds extra functionality
What I also need is to write a new factory function make_B for subclass B.
The solution I have found so far is
def make_B(*args, **kwargs):
    """
    same as make_A except that it returns an object of type B
    """
    out = make_A(*args, **kwargs)
    out.__class__ = B
    return out
This seems to work, but I am a bit worried about directly modifying the
__class__ attribute, as it feels to me like a hack. I am also worried about
unexpected side-effects this modification may have. Is this the recommended
solution or is there a "cleaner" pattern to achieve the same result?
I guess I finally found something that is not verbose yet still works. For this you need to replace inheritance with composition, which lets B consume an A instance via self.a = ....
To mimic the methods of A, you can override __getattr__ to delegate those method (and field) lookups to self.a.
The next snippet works for me
class A:
    def __init__(self, val):
        self.val = val

    def method(self):
        print(f"A={self.val}")

def make_A():
    return A(42)

class B:
    def __init__(self, *args, consume_A=None, **kwargs):
        if consume_A is None:
            self.a = A(*args, **kwargs)
        else:
            self.a = consume_A

    def __getattr__(self, name):
        return getattr(self.a, name)

    def my_extension(self):
        print(f"B={self.val * 100}")

def make_B(*args, **kwargs):
    return B(consume_A=make_A(*args, **kwargs))

b = make_B()
b.method()        # A=42
b.my_extension()  # B=4200
What makes this approach preferable to yours is that modifying __class__ is probably not harmless, whereas __getattr__ and __getattribute__ are specifically provided as the mechanisms for resolving attribute lookups on an object. For more details, see this tutorial.
Make your original factory function more general by accepting a class as parameter: remember, everything is an object in Python, even classes.
def make(class_type, *args, **kwargs):
    return class_type(*args, **kwargs)

a = make(A)
b = make(B)
Since B has the same parameters as A, you don't need to make an A and then turn it into B: B inherits from A, so it "is an A" and will have the same functionality, plus the extra method that you added.
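For instance, assuming purely for illustration that A's constructor takes a single value argument, the same factory covers both classes:

a = make(A, 42)   # equivalent to A(42)
b = make(B, 42)   # equivalent to B(42); B reuses A's __init__
b.extra_method()  # plus whatever extra functionality B adds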
Hi, I'm trying to derive a class from ndarray. I'm sticking to the recipe found in the docs, but I get an error I do not understand when I override the __getitem__() function. I'm sure this is how it is supposed to work, but I do not understand how to do it correctly. My class, which basically adds a "dshape" property, looks like:
class Darray(np.ndarray):
    def __new__(cls, input_array, dshape, *args, **kwargs):
        obj = np.asarray(input_array).view(cls)
        obj.SelObj = SelObj
        obj.dshape = dshape
        return obj

    def __array_finalize__(self, obj):
        if obj is None: return
        self.info = getattr(obj, 'dshape', 'N')

    def __getitem__(self, index):
        return self[index]
When I now try to do:
D = Darray(ones((10, 10)), ("T", "N"))
the interpreter fails with a maximum recursion depth error, because __getitem__ calls itself over and over again.
Can someone explain to me why, and how one would implement a __getitem__ function?
cheers,
David
can someone explain to me why and how one would implement a getitem function?
For your current code, a __getitem__ isn't needed. Your class works fine (except for the undefined SelObj) when I remove the __getitem__ implementation.
The reason for the maximum recursion depth error is the definition of __getitem__, which uses self[index]: a shorthand notation for self.__getitem__(index). If you must override __getitem__, then make sure you call the superclass implementation of __getitem__:
def __getitem__(self, index):
    return super(Darray, self).__getitem__(index)
As for why you'd do this: there are lots of reasons for overriding this function, e.g. you might associate names with the rows of an array:
class NamedRows(np.ndarray):
    def __new__(cls, rows, *args, **kwargs):
        obj = np.asarray(*args, **kwargs).view(cls)
        obj.__row_name_idx = dict((n, i) for i, n in enumerate(rows))
        return obj

    def __getitem__(self, idx):
        if isinstance(idx, basestring):
            idx = self.__row_name_idx[idx]
        return super(NamedRows, self).__getitem__(idx)
Demo:
>>> a = NamedRows(["foo", "bar"], [[1,2,3], [4,5,6]])
>>> a["foo"]
NamedRows([1, 2, 3])
The problem is here:
def __getitem__(self, index):
    return self[index]
foo[index] just calls foo.__getitem__(index). But in your case, that just returns foo[index], which just calls foo.__getitem__(index). This repeats in an infinite loop until you run out of stack space.
If you want to defer to your parent class, you have to call its __getitem__ explicitly; note that super(Darray, self)[index] will not work, because super objects do not support subscripting:
def __getitem__(self, index):
    return super(Darray, self).__getitem__(index)
I don't understand why you want to inherit a class from the np.ndarray type. You can implement the same idea as above with a standard OOP approach. The following example does the same thing as your code, but more elegantly. Instead of subclassing, I just treat the numpy array as a member of my object that also carries dshape, and define __getitem__() and __setitem__() so it subscripts exactly like an np.ndarray object.
class Darray:
    def __init__(self, input_array, dshape):
        self.array = np.array(input_array)
        self.dshape = dshape

    def __getitem__(self, item):
        return self.array[item]

    def __setitem__(self, item, val):
        self.array[item] = val
Now you can write further methods to describe the exact behaviour you want: whatever dshape was supposed to do to the inherited array, it can now do to the self.array member.
The added benefit of this approach is that there is no headache over recursion depth, __array_finalize__, super(), or any of the other pitfalls of subclassing and overloading. There is always a simpler way for the intended use case.
Edit: In my example above, the __getitem__ method does not work for comma-separated indices on N-dimensional arrays. A fix for that:
def __getitem__(self, *args):
    return self.array.__getitem__(*args)
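A brief usage sketch of this composition-based Darray (np.ones and the ("T", "N") label are only illustrative):

import numpy as np

d = Darray(np.ones((10, 10)), ("T", "N"))
print(d.dshape)  # ('T', 'N')
print(d[0, 0])   # 1.0, forwarded to the underlying ndarray
d[0, 0] = 5      # assignment is forwarded through __setitem__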
I'm attempting to implement a decorator on certain methods in a class so that if the value has NOT been calculated yet, the method calculates the value; otherwise it just returns the precomputed value, which is stored in an instance defaultdict. I can't seem to figure out how to access the instance defaultdict from inside a decorator declared outside of the class. Any ideas on how to implement this?
Here are the imports (for a working example):
from collections import defaultdict
from math import sqrt
Here is my decorator:
class CalcOrPass:
    def __init__(self, func):
        self.f = func

    #if the value is already in the instance dict from SimpleData,
    #don't recalculate the values, instead return the value from the dict
    def __call__(self, *args, **kwargs):
        # can't figure out how to access/pass dict_from_SimpleData to here :(
        res = dict_from_SimpleData[self.f.__name__]
        if not res:
            res = self.f(*args, **kwargs)
            dict_from_SimpleData[self.f.__name__] = res
        return res
And here's the SimpleData class with decorated methods:
class SimpleData:
    def __init__(self, data):
        self.data = data
        self.stats = defaultdict()  # here's the dict I'm trying to access

    @CalcOrPass
    def mean(self):
        return sum(self.data) / float(len(self.data))

    @CalcOrPass
    def se(self):
        return [i - self.mean() for i in self.data]

    @CalcOrPass
    def variance(self):
        return sum(i**2 for i in self.se()) / float(len(self.data) - 1)

    @CalcOrPass
    def stdev(self):
        return sqrt(self.variance())
So far, I've tried declaring the decorator inside of SimpleData, trying to pass multiple arguments with the decorator(apparently you can't do this), and spinning around in my swivel chair while trying to toss paper airplanes into my scorpion tank. Any help would be appreciated!
The way you define your decorator, the target object information is lost. Use a function wrapper instead:
from functools import wraps

def CalcOrPass(func):
    @wraps(func)
    def result(self, *args, **kwargs):
        res = self.stats.get(func.__name__)  # None when not computed yet
        if not res:
            res = func(self, *args, **kwargs)
            self.stats[func.__name__] = res
        return res
    return result
wraps is from functools and not strictly necessary here, but very convenient.
Side note: defaultdict takes a factory function argument:
defaultdict(lambda: None)
But since you're testing for the existence of the key anyway, you should prefer a simple dict.
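For completeness, a small sketch of how the fixed decorator behaves with the SimpleData class from the question (the sample data is made up):

s = SimpleData([1, 2, 3])
print(s.mean())  # computed and stored in s.stats: 2.0
print(s.mean())  # returned straight from s.stats this time
print(s.stats)   # now contains {'mean': 2.0}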
You can't do what you want when your function is defined, because it is unbound. Here's a way to achieve it in a generic fashion at runtime:
class CalcOrPass(object):
    def __init__(self, func):
        self.f = func

    def __get__(self, obj, type=None):  # Cheat.
        return self.__class__(self.f.__get__(obj, type))

    #if the value is already in the instance dict from SimpleData,
    #don't recalculate the values, instead return the value from the dict
    def __call__(self, *args, **kwargs):
        # I'll concede that this doesn't look very pretty.
        # TODO handle KeyError here
        res = self.f.__self__.stats[self.f.__name__]
        if not res:
            res = self.f(*args, **kwargs)
            self.f.__self__.stats[self.f.__name__] = res
        return res
A short explanation:
Our decorator defines __get__ (and is hence said to be a descriptor). Whereas the default behaviour for an attribute access is to get it from the object's dictionary, if the descriptor method is defined, Python will call that instead.
With objects, object.__getattribute__ transforms an access like b.x into type(b).__dict__['x'].__get__(b, type(b)).
This way we can access the instance and its type from the descriptor's parameters.
Then we create a new CalcOrPass object which now decorates (wraps) a bound method instead of the old unbound function.
Note the new style class definition. I'm not sure if this will work with old-style classes, as I haven't tried it; just don't use those. :) This will work for both functions and methods, however.
What happens to the "old" decorated functions is left as an exercise.
I have a set of arrays that are very large and expensive to compute, and not all will necessarily be needed by my code on any given run. I would like to make their declaration optional, but ideally without having to rewrite my whole code.
Example of how it is now:
x = function_that_generates_huge_array_slowly(0)
y = function_that_generates_huge_array_slowly(1)
Example of what I'd like to do:
x = lambda: function_that_generates_huge_array_slowly(0)
y = lambda: function_that_generates_huge_array_slowly(1)
z = x * 5 # this doesn't work because lambda is a function
# is there something that would make this line behave like
# z = x() * 5?
g = x * 6
While using lambda as above achieves one of the desired effects - computation of the array is delayed until it is needed - if you use the variable "x" more than once, it has to be computed each time. I'd like to compute it only once.
EDIT:
After some additional searching, it looks like it is possible to do what I want (approximately) with "lazy" attributes in a class (e.g. http://code.activestate.com/recipes/131495-lazy-attributes/). I don't suppose there's any way to do something similar without making a separate class?
EDIT2: I'm trying to implement some of the solutions, but I'm running into an issue because I don't understand the difference between:
class sample(object):
    def __init__(self):
        class one(object):
            def __get__(self, obj, type=None):
                print "computing ..."
                obj.one = 1
                return 1
        self.one = one()
and
class sample(object):
    class one(object):
        def __get__(self, obj, type=None):
            print "computing ... "
            obj.one = 1
            return 1
    one = one()
I think some variation on these is what I'm looking for, since the expensive variables are intended to be part of a class.
The first half of your problem (reusing the value) is easily solved:
class LazyWrapper(object):
    def __init__(self, func):
        self.func = func
        self.value = None

    def __call__(self):
        if self.value is None:
            self.value = self.func()
        return self.value

lazy_wrapper = LazyWrapper(lambda: function_that_generates_huge_array_slowly(0))
But you still have to use it as lazy_wrapper(), not lazy_wrapper.
If you're going to be accessing some of the variables many times, it may be faster to use:
class LazyWrapper(object):
    def __init__(self, func):
        self.func = func

    def __call__(self):
        try:
            return self.value
        except AttributeError:
            self.value = self.func()
            return self.value
This will make the first call slower and subsequent uses faster.
Edit: I see you found a similar solution that requires you to use attributes on a class. Either way requires you to rewrite every lazy variable access, so just pick whichever you like.
Edit 2: You can also do:
class YourClass(object):
    def __init__(self, func):
        self.func = func

    @property
    def x(self):
        try:
            return self.value
        except AttributeError:
            self.value = self.func()
            return self.value
This lets you access x as a plain instance attribute; no additional class is needed. If you don't want to change the class signature (by making it require func), you can hard-code the function call into the property.
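Usage then looks like plain attribute access; a minimal sketch assuming the class above:

obj = YourClass(lambda: function_that_generates_huge_array_slowly(0))
z = obj.x * 5  # first access computes and caches the array
g = obj.x * 6  # later accesses reuse the cached value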
Writing a class is more robust, but optimizing for simplicity (which I think you are asking for), I came up with the following solution:
cache = {}

def expensive_calc(factor):
    print 'calculating...'
    return [1, 2, 3] * factor

def lookup(name):
    return (cache[name] if name in cache
            else cache.setdefault(name, expensive_calc(2)))

print 'run one'
print lookup('x') * 2
print 'run two'
print lookup('x') * 2
Python 3.2 and greater implement an LRU algorithm in the functools module to handle simple cases of caching/memoization:
import functools

@functools.lru_cache(maxsize=128)  # cache at most 128 items
def f(x):
    print("I'm being called with %r" % x)
    return x + 1

z = f(9) + f(9)**2
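The decorated function also exposes cache statistics and a reset hook:

print(f.cache_info())  # CacheInfo(hits=1, misses=1, maxsize=128, currsize=1)
f.cache_clear()        # empty the cache if the values must be recomputed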
You can't make a simple name like x truly evaluate lazily. A name is just an entry in a hash table (e.g. the one that locals() or globals() returns). Unless you patch the access methods of these system tables, you cannot attach execution of your code to simple name resolution.
But you can wrap functions in caching wrappers in different ways.
This is an OO way:
class CachedSlowCalculation(object):
    cache = {}  # our results

    def __init__(self, func):
        self.func = func

    def __call__(self, param):
        already_known = self.cache.get(param, None)
        if already_known:
            return already_known
        value = self.func(param)
        self.cache[param] = value
        return value

calc = CachedSlowCalculation(function_that_generates_huge_array_slowly)
z = calc(1) + calc(1)**2  # only calculates things once
This is a classless way:
def cached(func):
    func.__cache = {}  # we can attach attrs to objects, functions are objects

    def wrapped(param):
        cache = func.__cache
        already_known = cache.get(param, None)
        if already_known:
            return already_known
        value = func(param)
        cache[param] = value
        return value
    return wrapped

@cached
def f(x):
    print "I'm being called with %r" % x
    return x + 1

z = f(9) + f(9)**2  # see f called only once
In the real world you'll add some logic to keep the cache to a reasonable size, possibly using an LRU algorithm.
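As a rough sketch of that size-limiting logic (the bound and the eviction policy here are arbitrary choices, not part of the answer above), the classless decorator could evict the least recently used entry once the cache grows past a limit:

from collections import OrderedDict

def cached(func, maxsize=128):
    cache = OrderedDict()

    def wrapped(param):
        if param in cache:
            cache.move_to_end(param)   # mark as most recently used
            return cache[param]
        value = func(param)
        cache[param] = value
        if len(cache) > maxsize:
            cache.popitem(last=False)  # evict the least recently used entry
        return value
    return wrapped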
To me, it seems that the proper solution for your problem is subclassing a dict and using it.
class LazyDict(dict):
    def __init__(self, lazy_variables):
        self.lazy_vars = lazy_variables

    def __getitem__(self, key):
        if key not in self and key in self.lazy_vars:
            self[key] = self.lazy_vars[key]()
        return super().__getitem__(key)
def generate_a():
    print("generate var a lazily..")
    return "<a_large_array>"
# You can add as many variables as you want here
lazy_vars = {'a': generate_a}
lazy = LazyDict(lazy_vars)
# retrieve the variable you need from `lazy`
a = lazy['a']
print("Got a:", a)
And you can actually evaluate a variable lazily if you use exec to run your code; the solution is just supplying a custom globals mapping.
your_code = "print('inside exec');print(a)"
exec(your_code, lazy)
If you did your_code = open(your_file).read(), you could actually run your code and achieve what you want. But I think the more practical approach would be the former one.
I don't know if this will make sense, but...
I'm trying to dynamically assign methods to an object.
#translate this
object.key(value)
#into this
object.method({key:value})
To be more specific in my example, I have an object (which I didn't write); let's call it motor. It has some generic methods: set, status, and a few others. Some take a dictionary as an argument and some take a list. To change the motor's speed and see the result, I use:
motor.set({'move_at':10})
print motor.status('velocity')
The motor object, then formats this request into a JSON-RPC string, and sends it to an IO daemon. The python motor object doesn't care what the arguments are, it just handles JSON formatting and sockets. The strings move_at and velocity are just two of what might be hundreds of valid arguments.
What I'd like to do is the following instead:
motor.move_at(10)
print motor.velocity()
I'd like to do it in a generic way since I have so many different arguments I can pass. What I don't want to do is this:
# create a new function for every possible argument
def move_at(self, x):
    return self.set({'move_at': x})

def velocity(self):
    return self.status('velocity')

# and a hundred more...
I did some searching on this, which suggested the solution lies with lambdas and metaprogramming, two subjects I haven't been able to get my head around.
UPDATE:
Based on the code from user470379 I've come up with the following...
# This is what I have now....
class Motor(object):
    def set(self, a_dict):
        print "Setting a value", a_dict

    def status(self, a_list):
        print "requesting the status of", a_list
        return 10

# Now to extend it....
class MyMotor(Motor):
    def __getattr__(self, name):
        def special_fn(*value):
            # What we return depends on how many arguments there are.
            if len(value) == 0: return self.status(name)
            if len(value) == 1: return self.set({name: value[0]})
        return special_fn

    def __setattr__(self, attr, value):  # This is based on some other answers
        self.set({attr: value})

x = MyMotor()
x.move_at = 20  # Uses __setattr__
x.move_at(10)   # May remove this style from __getattr__ to simplify code.
print x.velocity()
output:
Setting a value {'move_at': 20}
Setting a value {'move_at': 10}
10
Thank you to everyone who helped!
What about creating your own __getattr__ for the class that returns a function created on the fly? IIRC, there are some tricky cases to watch out for between __getattr__ and __getattribute__ that I don't recall off the top of my head; I'm sure someone will post a comment to remind me:
def __getattr__(self, name):
    def set_fn(value):  # no self parameter: the enclosing method's self is captured by the closure
        return self.set({name: value})
    return set_fn
Then what should happen is that accessing an attribute that doesn't exist (i.e. move_at) will call __getattr__, which creates and returns a new function (set_fn above). The name variable of that function is bound to the name parameter passed into __getattr__ ("move_at" in this case). The returned function is then called with the arguments you passed (10 in this case).
Edit
A more concise version using lambdas (untested):
def __getattr__(self, name):
    return lambda value: self.set({name: value})
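To also cover the zero-argument status calls from the question, the returned function could branch on whether a value was supplied; a sketch along the lines of the asker's UPDATE:

def __getattr__(self, name):
    def fn(*args):
        if not args:
            return self.status(name)      # motor.velocity()
        return self.set({name: args[0]})  # motor.move_at(10)
    return fn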
There are a lot of different potential answers to this, but many of them will probably involve subclassing the object and/or writing or overriding the __getattr__ function.
Essentially, the __getattr__ function is called whenever Python can't find an attribute in the usual way.
Assuming you can subclass your object, here's a simple example of what you might do (it's a bit clumsy but it's a start):
class foo(object):
    def __init__(self):
        print "initting " + repr(self)
        self.a = 5

    def meth(self):
        print self.a

class newfoo(foo):
    def __init__(self):
        super(newfoo, self).__init__()
        def meth2():                       # Or, use a lambda: ...
            print "meth2: " + str(self.a)  # but you don't have to
        self.methdict = {"meth2": meth2}

    def __getattr__(self, name):
        return self.methdict[name]

f = foo()
g = newfoo()
f.meth()
g.meth()
g.meth2()
Output:
initting <__main__.foo object at 0xb7701e4c>
initting <__main__.newfoo object at 0xb7701e8c>
5
5
meth2: 5
You seem to have certain "properties" of your object that can be set by
obj.set({"name": value})
and queried by
obj.status("name")
A common way to go in Python is to map this behaviour to what looks like simple attribute access. So we write
obj.name = value
to set the property, and we simply use
obj.name
to query it. This can easily be implemented using the __getattr__() and __setattr__() special methods:
class MyMotor(Motor):
    def __init__(self, *args, **kw):
        self._init_flag = True
        Motor.__init__(self, *args, **kw)
        self._init_flag = False

    def __getattr__(self, name):
        return self.status(name)

    def __setattr__(self, name, value):
        if self._init_flag or hasattr(self, name):
            return Motor.__setattr__(self, name, value)
        return self.set({name: value})
Note that this code disallows the dynamic creation of new "real" attributes of Motor instances after the initialisation. If this is needed, corresponding exceptions could be added to the __setattr__() implementation.
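One way such an exception could look (the underscore convention is purely illustrative):

def __setattr__(self, name, value):
    # treat underscore-prefixed names as ordinary attributes, so new
    # internal state can still be created after initialisation
    if name.startswith("_") or self._init_flag or hasattr(self, name):
        return Motor.__setattr__(self, name, value)
    return self.set({name: value})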
Instead of setting with function-call syntax, consider using assignment (with =). Similarly, just use attribute syntax to get a value, instead of function-call syntax. Then you can use __getattr__ and __setattr__:
class OtherType(object):  # this is the one you didn't write
    # dummy implementations for the example:
    def set(self, D):
        print "setting", D

    def status(self, key):
        return "<value of %s>" % key

class Blah(object):
    def __init__(self, parent):
        object.__setattr__(self, "_parent", parent)

    def __getattr__(self, attr):
        return self._parent.status(attr)

    def __setattr__(self, attr, value):
        self._parent.set({attr: value})

obj = Blah(OtherType())
obj.velocity = 42   # prints setting {'velocity': 42}
print obj.velocity  # prints <value of velocity>