How to write factory functions for subclasses?

Suppose there is a class A and a factory function make_A:
class A:
    ...

def make_A(*args, **kwargs):
    # returns an object of type A
    ...
both defined in some_package.
Suppose also that I want to expand the functionality of A, by subclassing it,
without overriding the constructor:
from some_package import A, make_A
class B(A):
    def extra_method(self, ...):
        # adds extra functionality
What I also need is to write a new factory function make_B for subclass B.
The solution I have found so far is
def make_B(*args, **kwargs):
    """
    same as make_A except that it returns an object of type B
    """
    out = make_A(*args, **kwargs)
    out.__class__ = B
    return out
This seems to work, but I am a bit worried about directly modifying the
__class__ attribute, as it feels to me like a hack. I am also worried about
unexpected side-effects this modification may have. Is this the recommended
solution or is there a "cleaner" pattern to achieve the same result?
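For context on why that worry is justified, here is a minimal sketch of one concrete side effect (my addition, with a hypothetical extra attribute): reassigning __class__ skips B.__init__ entirely, so any state B's constructor is supposed to set up is simply missing. Reassignment can also raise TypeError outright when the two classes have incompatible layouts (for example, if one of them defines __slots__).
class A:
    pass

def make_A():
    return A()

class B(A):
    def __init__(self):
        super().__init__()
        self.extra = []  # state every B instance is supposed to have

def make_B():
    out = make_A()
    out.__class__ = B  # B.__init__ never runs
    return out

b = make_B()
print(isinstance(b, B))     # True
print(hasattr(b, 'extra'))  # False -- the attribute was never set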

I think I finally found something that is not verbose yet still works. For this you need to replace inheritance with composition, which lets B consume an A object by storing it as self.a = ....
To mimic the methods of A you can override __getattr__ to delegate those lookups (methods and fields alike) to self.a.
The following snippet works for me:
class A:
    def __init__(self, val):
        self.val = val

    def method(self):
        print(f"A={self.val}")

def make_A():
    return A(42)

class B:
    def __init__(self, *args, consume_A=None, **kwargs):
        if consume_A is None:
            self.a = A(*args, **kwargs)
        else:
            self.a = consume_A

    def __getattr__(self, name):
        return getattr(self.a, name)

    def my_extension(self):
        print(f"B={self.val * 100}")

def make_B(*args, **kwargs):
    return B(consume_A=make_A(*args, **kwargs))

b = make_B()
b.method()        # A=42
b.my_extension()  # B=4200
What makes this approach superior to yours is that modifying __class__ is probably not harmless. __getattr__ and __getattribute__, on the other hand, are the mechanisms the language specifically provides for customizing attribute lookup on an object. For more details, see this tutorial.
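One caveat worth adding to the composition approach (my note, not part of the original answer): B no longer passes isinstance checks against A, and special methods invoked implicitly (len(), ==, and so on) bypass __getattr__, because Python looks them up on the type rather than the instance. A short sketch, reusing the A and B classes above:
b = make_B()
print(isinstance(b, A))  # False -- composition breaks isinstance checks

# If A defined __len__, len(b) would still fail: implicit special-method
# lookup goes through type(b), never through B.__getattr__.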

Make your original factory function more general by accepting a class as parameter: remember, everything is an object in Python, even classes.
def make(class_type, *args, **kwargs):
    return class_type(*args, **kwargs)

a = make(A)
b = make(B)
Since B has the same parameters as A, you don't need to make an A and then turn it into B: B inherits from A, so it "is an A" and will have the same functionality, plus the extra method that you added.
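If you still want dedicated factory names on top of the generic make function, a minimal sketch (my addition): functools.partial can bind the class argument once per subclass:
import functools

make_A = functools.partial(make, A)
make_B = functools.partial(make, B)

b = make_B()  # equivalent to make(B)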

Related

How to instantiate a subclass type variable from an existing superclass type object in Python

I have a situation where I extend a class with several attributes:
class SuperClass:
    def __init__(self, tediously, many, attributes):
        # assign the attributes like "self.attr = attr"
        ...

class SubClass(SuperClass):
    def __init__(self, id, **kwargs):
        self.id = id
        super().__init__(**kwargs)
And then I want to create instances, but I understand that this leads to a situation where a subclass can only be instantiated like this:
super_instance = SuperClass(tediously, many, attributes)
sub_instance = SubClass(id, tediously=super_instance.tediously, many=super_instance.many, attributes=super_instance.attributes)
My question is if anything prettier / cleaner can be done to instantiate a subclass by copying a superclass instance's attributes, without having to write a piece of sausage code to manually do it (either in the constructor call, or a constructor function's body)... Something like:
utopic_sub_instance = SubClass(id, **super_instance)
Maybe you want some concrete ideas of how to not write so much code?
So one way to do it would be like this:
class A:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

class B(A):
    def __init__(self, x, a, b, c):
        self.x = x
        super().__init__(a, b, c)

a = A(1, 2, 3)
b = B('x', 1, 2, 3)
# so your problem is that you want to avoid passing 1,2,3 manually, right?
# So as a comment suggests, you should use alternative constructors here.
# Alternative constructors are good because people not very familiar with
# Python could also understand them.
# Alternatively, you could use this syntax, but it is a little dangerous and prone to producing
# bugs in the future that are hard to spot
class BDangerous(A):
    def __init__(self, x, a, b, c):
        self.x = x
        kwargs = dict(locals())
        kwargs.pop('x')
        kwargs.pop('self')
        # This is dangerous because if in the future someone adds a variable in this
        # scope, you need to remember to pop that also
        # Also, if in the future, the super constructor acquires the same parameter that
        # someone else adds as a variable here... maybe you will end up passing an argument
        # unwillingly. That might cause a bug
        # kwargs.pop(...pop all variable names you don't want to pass)
        super().__init__(**kwargs)
class BSafe(A):
    def __init__(self, x, a, b, c):
        self.x = x
        bad_kwargs = dict(locals())
        # This is safer: you are explicit about which arguments you're passing
        good_kwargs = {}
        for name in 'a,b,c'.split(','):
            good_kwargs[name] = bad_kwargs[name]
        # but really, this solution is not that much better compared to simply passing all
        # parameters explicitly
        super().__init__(**good_kwargs)
Alternatively, let's go a little crazier. We'll use introspection to dynamically build the dict to pass as arguments. I have not included in my example the case where there are keyword-only arguments, defaults, *args or **kwargs
class A:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

class B(A):
    def __init__(self, x, y, z, super_instance):
        import inspect
        spec = inspect.getfullargspec(A.__init__)
        positional_args = []
        super_vars = vars(super_instance)
        for arg_name in spec.args[1:]:  # to exclude 'self'
            positional_args.append(super_vars[arg_name])
        # ...but of course, you must have the guarantee that constructor
        # arguments will be set as instance attributes with the same names
        super().__init__(*positional_args)
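A quick usage sketch for the introspection version above (my addition; the 'x', 'y', 'z' values are arbitrary placeholders):
a = A(1, 2, 3)
b = B('x', 'y', 'z', super_instance=a)
print(b.a, b.b, b.c)  # 1 2 3 -- copied over from the superclass instance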
I finally managed to do it using a combination of an alternative constructor and the __dict__ attribute of the super_instance.
class SuperClass:
    def __init__(self, tediously, many, attributes):
        self.tediously = tediously
        self.many = many
        self.attributes = attributes

class SubClass(SuperClass):
    def __init__(self, additional_attribute, tediously, many, attributes):
        self.additional_attribute = additional_attribute
        super().__init__(tediously, many, attributes)

    @classmethod
    def from_super_instance(cls, additional_attribute, super_instance):
        return cls(additional_attribute=additional_attribute, **super_instance.__dict__)

super_instance = SuperClass("tediously", "many", "attributes")
sub_instance = SubClass.from_super_instance("additional_attribute", super_instance)
NOTE: Bear in mind that python executes statements sequentially, so if you want to override the value of an inherited attribute, put super().__init__() before the other assignment statements in SubClass.__init__.
NOTE 2: pydantic has this very nice feature where their BaseModel class auto generates an .__init__() method, helps with attribute type validation and offers a .dict() method for such models (it's basically the same as .__dict__ though).
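For reference, a minimal sketch of the pydantic feature mentioned in NOTE 2 (my addition, written against the pydantic v1 API, where models expose .dict()):
from pydantic import BaseModel

class SuperModel(BaseModel):
    tediously: str
    many: str
    attributes: str

class SubModel(SuperModel):
    additional_attribute: str

sup = SuperModel(tediously="t", many="m", attributes="a")
sub = SubModel(additional_attribute="x", **sup.dict())  # .dict() feeds the generated __init__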
Kinda ran into the same question and just figured one could simply do:
class SubClass(SuperClass):
    def __init__(self, additional_attribute, **kwargs):
        self.additional_attribute = additional_attribute
        super().__init__(**kwargs)

super_class = SuperClass("tediously", "many", "attributes")
sub_instance = SubClass("additional_attribute", **super_class.__dict__)

Is there a nice way to partially-bind class parameters in Python? [duplicate]

I want to create a class that behaves like collections.defaultdict, without having the usage code specify the factory. EG:
instead of
class Config(collections.defaultdict):
    pass
this:
Config = functools.partial(collections.defaultdict, list)
This almost works, but
isinstance(Config(), Config)
fails. I am betting this clue means there are more devious problems deeper in also. So is there a way to actually achieve this?
I also tried:
class Config(object):
    __init__ = functools.partial(collections.defaultdict, list)
I don't think there's a standard method to do it, but if you need it often, you can just put together your own small function:
import functools
import collections

def partialclass(cls, *args, **kwds):
    class NewCls(cls):
        __init__ = functools.partialmethod(cls.__init__, *args, **kwds)
    return NewCls

if __name__ == '__main__':
    Config = partialclass(collections.defaultdict, list)
    assert isinstance(Config(), Config)
I had a similar problem but also required instances of my partially applied class to be pickle-able. I thought I would share what I ended up with.
I adapted fjarri's answer by peeking at Python's own collections.namedtuple. The below function creates a named subclass that can be pickled.
from functools import partialmethod
import sys

def partialclass(name, cls, *args, **kwds):
    new_cls = type(name, (cls,), {
        '__init__': partialmethod(cls.__init__, *args, **kwds)
    })

    # The following is copied nearly verbatim from namedtuple's source.
    # For pickling to work, the __module__ variable needs to be set to the frame
    # where the named tuple is created. Bypass this step in environments where
    # sys._getframe is not defined (Jython for example) or sys._getframe is not
    # defined for arguments greater than 0 (IronPython).
    try:
        new_cls.__module__ = sys._getframe(1).f_globals.get('__name__', '__main__')
    except (AttributeError, ValueError):
        pass

    return new_cls
At least in Python 3.8.5 it just works with functools.partial:
import functools

class Test:
    def __init__(self, foo):
        self.foo = foo

PartialClass = functools.partial(Test, 1)

instance = PartialClass()
instance.foo
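A hedged caveat to the answer above (my addition): functools.partial gives you a callable that builds instances, but PartialClass itself is a partial object, not a class, so the isinstance check from the question still fails:
print(isinstance(instance, Test))  # True -- instances are plain Test objects
# isinstance(instance, PartialClass) raises TypeError,
# because PartialClass is not a type.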
If you actually need working explicit type checks via isinstance, you can simply create a not too trivial subclass:
class Config(collections.defaultdict):
    def __init__(self):  # no arguments here
        # call the defaultdict init with the list factory
        super(Config, self).__init__(list)
You'll have no-argument construction with the list factory and
isinstance(Config(), Config)
will work as well.
Could use *args and **kwargs:
class Foo:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def printy(self):
        print("a:", self.a, ", b:", self.b)

class Bar(Foo):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, b=123, **kwargs)

if __name__ == "__main__":
    bar = Bar(1)
    bar.printy()  # Prints: "a: 1 , b: 123"

multiple python class inheritance

I am trying to understand Python's class inheritance and I am having trouble figuring out how to do the following:
How can I inherit a method from a class conditionally, based on the child's input?
I have tried the code below without much success.
class A(object):
    def __init__(self, path):
        self.path = path

    def something(self):
        print("Function %s" % self.path)

class B(object):
    def __init__(self, path):
        self.path = path
        self.c = 'something'

    def something(self):
        print('%s function with %s' % (self.path, self.c))

class C(A, B):
    def __init__(self, path):
        # super(C, self).__init__(path)
        if path == 'A':
            A.__init__(self, path)
        if path == 'B':
            B.__init__(self, path)
        print('class: %s' % self.path)

if __name__ == '__main__':
    C('A')
    out = C('B')
    out.something()
I get the following output:
class: A
class: B
Function B
While I would like to see:
class: A
class: B
B function with something
I guess the reason why A.something() is used (instead of B.something()) has to do with Python's MRO.
Calling __init__ on either parent class does not change the inheritance structure of your classes, no. You are only changing what initialiser method is run in addition to C.__init__ when an instance is created. C inherits from both A and B, and all methods of B are shadowed by those on A due to the order of inheritance.
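You can see that shadowing directly by inspecting the MRO of the C(A, B) class from the question (my addition):
print(C.__mro__)
# (<class '__main__.C'>, <class '__main__.A'>, <class '__main__.B'>, <class 'object'>)
# A precedes B, so C().something() resolves to A.something.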
If you need to alter class inheritance based on a value in the constructor, create two separate classes, with different structures. Then provide a different callable as the API to create an instance:
class CA(A):
    # just inherit __init__, no need to override
    pass

class CB(B):
    # just inherit __init__, no need to override
    pass

def C(path):
    # create an instance of a class based on the value of path
    class_map = {'A': CA, 'B': CB}
    return class_map[path](path)
The user of your API still has the name C() to call; C('A') produces an instance of a different class from C('B'), but they both implement the same interface, so this doesn't matter to the caller.
If you have to have a common 'C' class to use in isinstance() or issubclass() tests, you could mix one in, and use the __new__ method to override what subclass is returned:
class C:
    def __new__(cls, path):
        if cls is not C:
            # for inherited classes, not C itself
            return super().__new__(cls)
        class_map = {'A': CA, 'B': CB}
        cls = class_map[path]
        # this is a subclass of C, so __init__ will be called on it
        return cls.__new__(cls, path)

class CA(C, A):
    # just inherit __init__, no need to override
    pass

class CB(C, B):
    # just inherit __init__, no need to override
    pass
__new__ is called to construct the new instance object; if the __new__ method returns an instance of the class (or a subclass thereof) then __init__ will automatically be called on that new instance object. This is why C.__new__() returns the result of CA.__new__() or CB.__new__(); __init__ is going to be called for you.
Demo of the latter:
>>> C('A').something()
Function A
>>> C('B').something()
B function with something
>>> isinstance(C('A'), C)
True
>>> isinstance(C('B'), C)
True
>>> isinstance(C('A'), A)
True
>>> isinstance(C('A'), B)
False
If neither of these options are workable for your specific usecase, you'd have to add more routing in a new somemethod() implementation on C, which then calls either A.something(self) or B.something(self) based on self.path. This becomes cumbersome really quickly when you have to do this for every single method, but a decorator could help there:
from functools import wraps

def pathrouted(f):
    @wraps(f)
    def wrapped(self, *args, **kwargs):
        # call the wrapped version first, ignore return value, in case this
        # sets self.path or has other side effects
        f(self, *args, **kwargs)
        # then pick the class from the MRO as named by path, and call the
        # original version
        cls = next(c for c in type(self).__mro__ if c.__name__ == self.path)
        return getattr(cls, f.__name__)(self, *args, **kwargs)
    return wrapped
then use that on empty methods on your class:
class C(A, B):
    @pathrouted
    def __init__(self, path):
        self.path = path
        # either A.__init__ or B.__init__ will be called next

    @pathrouted
    def something(self):
        pass  # doesn't matter, A.something or B.something is called too
This is, however, becoming very unpythonic and ugly.
While Martijn's answer is (as usual) close to perfect, I'd just like to point out that from a design POV, inheritance is the wrong tool here.
Remember that implementation inheritance is actually a static and somewhat restricted kind of composition/delegation, so as soon as you want something more dynamic, the proper design is to eschew inheritance and go for full composition/delegation, the canonical examples being the State and Strategy patterns. Applied to your example, this might look something like:
class C(object):
    def __init__(self, strategy):
        self.strategy = strategy

    def something(self):
        return self.strategy.something(self)

class AStrategy(object):
    def something(self, owner):
        print("Function A")

class BStrategy(object):
    def __init__(self):
        self.c = "something"

    def something(self, owner):
        print("B function with %s" % self.c)

if __name__ == '__main__':
    a = C(AStrategy())
    a.something()
    b = C(BStrategy())
    b.something()
Then if you need to allow the user to specify the strategy by name (as string), you can add the factory pattern to the solution
STRATEGIES = {
    "A": AStrategy,
    "B": BStrategy,
}

def cfactory(strategy_name):
    try:
        strategy_class = STRATEGIES[strategy_name]
    except KeyError:
        raise ValueError("'%s' is not a valid strategy" % strategy_name)
    return C(strategy_class())

if __name__ == '__main__':
    a = cfactory("A")
    a.something()
    b = cfactory("B")
    b.something()
Martijn's answer explained how to choose an object inheriting from one of two classes. Python also makes it easy to forward a method to a different class:
>>> class C:
        parents = {'A': A, 'B': B}
        def __init__(self, path):
            self.parent = C.parents[path]
            self.parent.__init__(self, path)  # forward object initialization
        def something(self):
            self.parent.something(self)  # forward something method

>>> ca = C('A')
>>> cb = C('B')
>>> ca.something()
Function A
>>> cb.something()
B function with something
>>> ca.path
'A'
>>> cb.path
'B'
>>> cb.c
'something'
>>> ca.c
Traceback (most recent call last):
  File "<pyshell#46>", line 1, in <module>
    ca.c
AttributeError: 'C' object has no attribute 'c'
But here class C does not inherit from A or B:
>>> C.__mro__
(<class '__main__.C'>, <class 'object'>)
Below is my original solution using monkey patching:
>>> class C:
        parents = {'A': A, 'B': B}
        def __init__(self, path):
            parent = C.parents[path]
            parent.__init__(self, path)  # forward object initialization
            self.something = lambda: parent.something(self)  # "borrow" something method
It avoids the parent attribute on the C class, but is less readable...

How to share same method on two different classes in python

I have class:
class A(object):
    def do_computing(self):
        print "do_computing"
Then I have:
new_class = type('B', (object,), {'a': '#A', 'b': '#B'})
What I want to achieve is to make all methods and properties of class A members of class B. Class A can have from 0 to N such elements, and I want them all made members of class B.
So far I get to:
methods = {}
for el in dir(A):
    if el.startswith('_'):
        continue
    tmp = getattr(A, el)
    if isinstance(tmp, property):
        methods[el] = tmp
    if isinstance(tmp, types.MethodType):
        methods[el] = tmp

instance_class = type('B', (object,), {'a': '#A', 'b': '#B'})

for name, func in methods.items():
    new_method = types.MethodType(func, None, instance_class)
    setattr(instance_class, name, new_method)
But then when I run:
instance_class().do_computing()
I get an error:
TypeError: unbound method do_computing() must be called with A instance as first argument (got B instance instead)
Why did I have to do that? We have a lot of legacy code and I need fancy objects that will pretend to be the old objects but really aren't.
One more important thing: I cannot use inheritance, too much magic happens in the background.
If you do it like this, it will work:
import types

class A(object):
    def do_computing(self):
        print "do_computing"

methods = {name: value for name, value in A.__dict__.iteritems()
           if not name.startswith('_')}

instance_class = type('B', (object,), {'a': '#A', 'b': '#B'})

for name, func in methods.iteritems():
    new_method = types.MethodType(func, None, instance_class)
    setattr(instance_class, name, new_method)

instance_class().do_computing()
Unless I'm missing something, you can do this with inheritance:
class B(A):
    def __init__(self):
        super(B, self).__init__()
Then:
>>> b = B()
>>> b.do_computing()
do_computing
Edit: cms_mgr said the same in the comments, also fixed indentation
Are you creating a facade? Maybe you want something like this:
Making a facade in Python 2.5
http://en.wikipedia.org/wiki/Facade_pattern
You could also use delegators. Here's an example from the wxPython AGW:
_methods = ["GetIndent", "SetIndent", "GetSpacing", "SetSpacing", "GetImageList", "GetStateImageList",
            "GetButtonsImageList", "AssignImageList", "AssignStateImageList", "AssignButtonsImageList",
            "SetImageList", "SetButtonsImageList", "SetStateImageList", 'other_methods']

def create_delegator_for(method):
    """
    Creates a method that forwards calls to `self._main_win` (an instance of :class:`TreeListMainWindow`).

    :param `method`: one method inside the :class:`TreeListMainWindow` local scope.
    """
    def delegate(self, *args, **kwargs):
        return getattr(self._main_win, method)(*args, **kwargs)
    return delegate

# Create methods that delegate to self._main_win. This approach allows for
# overriding these methods in possible subclasses of HyperTreeList
for method in _methods:
    setattr(HyperTreeList, method, create_delegator_for(method))
Note that these wrap class methods... i.e. both functions take a signature like def func(self, some, other, args) and are intended to be called like self.func(some, args). If you want to delegate a class function to a non-class function, you'll need to modify the delegator, as sketched below.
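A minimal sketch of that modification (my addition; plain_function is a hypothetical module-level callable): for a non-method there is no receiver to forward, so the delegator simply drops self:
def create_delegator_for_function(func):
    """Wrap a plain (non-method) callable so it can be attached to a class."""
    def delegate(self, *args, **kwargs):
        return func(*args, **kwargs)  # self is intentionally ignored
    return delegate

# e.g. setattr(HyperTreeList, "plain_function", create_delegator_for_function(plain_function))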
You can inherit from a parent class as such:
class Awesome:
    def method_a(self):
        return "blee"

class Beauty(Awesome):
    def __init__(self):
        self.x = self.method_a()

b = Beauty()
print(b.x)  # prints: blee
This was freely typed, but the logic is the same nonetheless and should work.
You can also do fun things with setattr like so:
# as you can see this class is worthless and is nothing
class blee:
    pass

b = blee()
setattr(b, "variable_1", "123456")
print(b.variable_1)  # prints: 123456
Essentially you can assign any object or method to a class instance with setattr.
EDIT: Just realized that you did use setattr, woops ;)
Hope this helps!

Implementing __neg__ in a generic way for all subclasses in Python

I apologize in advance for the rather long question.
I'm implementing callable objects and would like them to behave somewhat like (mathematical) functions. I have a base class whose __call__ method raises NotImplementedError, so users must subclass to define __call__. My question is: how can I define the special method __neg__ in the base class so subclasses immediately have the expected behavior without having to implement __neg__ in each subclass? My sense of the expected behavior is that if f is an instance of (a subclass of) the base class with a properly defined __call__, then -f should be an instance of the same class as f, possessing all the same attributes as f, except for __call__, which should return the negative of f's __call__.
Here's an example of what I mean:
class Base(object):
    def __call__(self, *args, **kwargs):
        raise NotImplementedError, 'Please subclass'

    def __neg__(self):
        def call(*args, **kwargs):
            return -self(*args, **kwargs)
        mBase = type('mBase', (Base,), {'__call__': call})
        return mBase()

class One(Base):
    def __init__(self, data=None):
        self.data = data

    def __call__(self, *args, **kwargs):
        return 1
This has the expected behavior:
one = One()
print one() # Prints 1
minus_one = -one
print minus_one() # Prints -1
though it's not exactly what I'd like since minus_one is not an instance of the same class as one (but I could live with that).
Now I'd like the new instance minus_one to inherit all attributes and methods of one; only the __call__ method should change. So I could change __neg__ to
def __neg__(self):
    def call(*args, **kwargs):
        return -self(*args, **kwargs)
    mBase = type('mBase', (Base,), {'__call__': call})
    new = mBase()
    for n, v in inspect.getmembers(self):
        if n != '__call__':
            setattr(new, n, v)
    return new
This seems to work. My question is: are there cons to this strategy? Implementing a generic __neg__ must be a standard exercise but I couldn't find anything on it on the web. Are there recommended alternatives?
Thanks in advance for any comments.
Your approach has several downsides. One example is that you copy all members of the original instance to the new instance -- this won't work if your class overrides any special methods other than __call__, since special methods are only looked up in the dictionary of the object's type when called implicitly. Moreover, it copies a lot of stuff that is actually inherited from object and doesn't need to go in the instance's __dict__.
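A small demonstration of that lookup rule (my addition): assigning a special method on the instance has no effect on implicit invocation, because Python consults the type:
class F(object):
    def __call__(self):
        return 1

f = F()
f.__call__ = lambda: 2  # stored in the instance __dict__ only
print f()               # still prints 1: f() looks up __call__ on type(f)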
An easier approach that satisfies your exact requirements is to make the new type a subclass of the instance's original type. This can be done by defining a local class inside the __neg__() method:
def __neg__(self):
    class Neg(self.__class__):
        def __call__(self_, *args, **kwargs):
            return -self(*args, **kwargs)
    neg = Base.__new__(Neg)
    neg.__dict__ = self.__dict__.copy()
    return neg
This defines a new class Neg derived from the original function's type and overrides its __call__() method. It creates an instance of this class using Base's constructor -- this is to cover the case where self's class takes constructor arguments. Finally, we copy everything that is directly stored in the instance self to the new instance.
If I were to design the system, I'd take a completely different approach. I'd fix the interface for a function and would only rely on this fixed interface for every function. I wouldn't bother to copy all attributes of an instance to the negated function, but rather do this:
class Function(object):
    def __neg__(self):
        return NegatedFunction(self)

    def __add__(self, other):
        return SumFunction(self, other)

class NegatedFunction(Function):
    def __init__(self, f):
        self.f = f

    def __call__(self, *args, **kwargs):
        return -self.f(*args, **kwargs)

class SumFunction(Function):
    def __init__(self, *funcs):
        self.funcs = funcs

    def __call__(self, *args, **kwargs):
        return sum(f(*args, **kwargs) for f in self.funcs)
This approach does not fulfil your requirement that the function returned by __neg__() has all the attributes and methods of the original function, but I think this requirement is rather questionable as far as design is concerned. I think dropping this requirement will give you a much cleaner and more general approach (as demonstrated by including an __add__() operator in the example above).
The basic problem you're running into is that __xxx__ methods are only looked up on the class, which means all instances of the same class will use the same __xxx__ methods. This suggests using a method similar to what Cat Plus Plus suggested; however, you also don't want your users to have to worry about even more special names (such as _call_impl and _negate).
If you don't mind the possibly mind-melting power of metaclasses, that is the route to take. A metaclass can add in the _negate attribute automatically (and name mangle it to avoid clashes), as well as take the __call__ that your user wrote and rename it to _call, then create a new __call__ that calls the old __call__ (now called _call ;) and then negates the result, if necessary, before returning it.
Here's the code:
import copy
import inspect

class MetaFunction(type):
    def __new__(metacls, cls_name, cls_bases, cls_dict):
        result_class = type.__new__(metacls, cls_name, cls_bases, cls_dict)
        if '__call__' in cls_dict:
            original_call = cls_dict['__call__']
            args, varargs, kwargs, defaults = inspect.getargspec(original_call)
            args = args[1:]
            if defaults is None:
                defaults = [''] * len(args)
            else:
                defaults = [''] * (len(args) - len(defaults)) + list(defaults)
            signature = []
            for arg, default in zip(args, defaults):
                if default:
                    signature.append('%s=%s' % (arg, default))
                else:
                    signature.append(arg)
            if varargs is not None:
                signature.append(varargs)
            if kwargs is not None:
                signature.append(kwargs)
            signature = ', '.join(signature)
            passed_args = ', '.join(args)
            new_call = (
                """def __call__(self, %(signature)s):
                    result = self._call(%(passed_args)s)
                    if self._%(cls_name)s__negate:
                        result = -result
                    return result"""
                % {
                    'cls_name': cls_name,
                    'signature': signature,
                    'passed_args': passed_args,
                })
            eval_dict = {}
            exec new_call in eval_dict
            new_call = eval_dict['__call__']
            new_call.__doc__ = original_call.__doc__
            new_call.__module__ = original_call.__module__
            new_call.__dict__ = original_call.__dict__
            setattr(result_class, '__call__', new_call)
            setattr(result_class, '_call', original_call)
            setattr(result_class, '_%s__negate' % cls_name, False)
            negate = """def __neg__(self):
                "returns an instance of the same class that returns the negation of __call__"
                negated = copy.copy(self)
                negated._%(cls_name)s__negate = not self._%(cls_name)s__negate
                return negated""" % {'cls_name': cls_name}
            eval_dict = {'copy': copy}
            exec negate in eval_dict
            negate = eval_dict['__neg__']
            negate.__module__ = new_call.__module__
            setattr(result_class, '__neg__', eval_dict['__neg__'])
        return result_class
class Base(object):
    __metaclass__ = MetaFunction

class Power(Base):
    def __init__(self, power):
        "power = the power to raise to"
        self.power = power

    def __call__(self, number):
        "raises number to power"
        return number ** self.power
and an example:
--> square = Power(2)
--> neg_square = -square
--> square(9)
81
--> neg_square(9)
-81
While the metaclass code itself can be complex, the resulting objects can be very easy to use. To be fair, most of the code, and the complexity, in MetaFunction is due to re-writing __call__ in order to preserve the call signature and make introspection useful... so instead of seeing __call__(*args, **kwargs) in help, you see this:
Help on Power in module test object:

class Power(Base)
 |  Method resolution order:
 |      Power
 |      Base
 |      __builtin__.object
 |
 |  Methods defined here:
 |
 |  __call__(self, number)
 |      raises number to power
 |
 |  __init__(self, power)
 |      power = the power to raise to
 |
 |  __neg__(self)
 |      returns an instance of the same class that returns the negation of __call__
Instead of creating a new type, you can keep a flag on the instance that says whether the call result should be negated or not. Then you can offload the actual overridable call behaviour to a separate (non-special) method, as part of your own protocol.
import copy

class Base(object):
    def __init__(self):
        self._negate_call = False

    def call_impl(self, *args, **kwargs):
        raise NotImplementedError

    def __call__(self, *args, **kwargs):
        result = self.call_impl(*args, **kwargs)
        return -result if self._negate_call else result

    def __neg__(self):
        other = copy.copy(self)
        other._negate_call = not other._negate_call
        return other
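A quick usage sketch for this flag-based protocol (my addition; Square is a hypothetical subclass):
class Square(Base):
    def call_impl(self, x):
        return x * x

square = Square()
neg_square = -square
print square(3)      # 9
print neg_square(3)  # -9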
