I'm trying to modify Guido's multimethod (dynamic dispatch) code:
http://www.artima.com/weblogs/viewpost.jsp?thread=101605
to handle inheritance and possibly out-of-order arguments.
e.g. (inheritance problem)

class A(object):
    pass

class B(A):
    pass

@multimethod(A, A)
def foo(arg1, arg2):
    print 'works'

foo(A(), A())  # works
foo(A(), B())  # fails
Is there a better way than iteratively checking for the super() of each item until one is found?
e.g. (argument ordering problem)
I was thinking of this from a collision detection standpoint, e.g. both

foo(Car(), Truck()) and
foo(Truck(), Car())

should trigger

foo(Car, Truck)  # Note: @multimethod(Truck, Car) will throw an exception if @multimethod(Car, Truck) was registered first
I'm looking specifically for an 'elegant' solution. I know that I could just brute force my way through all the possibilities, but I'm trying to avoid that. I just wanted to get some input/ideas before sitting down and pounding out a solution.
Thanks
Regarding the inheritance issue: this can be done with a slight change to MultiMethod, iterating through self.typemap and checking with issubclass:
registry = {}

class MultiMethod(object):
    def __init__(self, name):
        self.name = name
        self.typemap = {}

    def __call__(self, *args):
        types = tuple(arg.__class__ for arg in args)  # a generator expression!
        for typemap_types in self.typemap:
            if all(issubclass(arg_type, known_type)
                   for arg_type, known_type in zip(types, typemap_types)):
                function = self.typemap.get(typemap_types)
                return function(*args)
        raise TypeError("no match")

    def register(self, types, function):
        if types in self.typemap:
            raise TypeError("duplicate registration")
        self.typemap[types] = function

def multimethod(*types):
    def register(function):
        name = function.__name__
        mm = registry.get(name)
        if mm is None:
            mm = registry[name] = MultiMethod(name)
        mm.register(types, function)
        return mm
    return register
class A(object):
    pass

class B(A):
    pass

class C(object):
    pass

@multimethod(A, A)
def foo(arg1, arg2):
    print 'works'

foo(A(), A())  # works
foo(A(), B())  # works
foo(C(), B())  # raises TypeError
Note that self.typemap is a dict, and dicts are unordered. So if you use @multimethod to register two functions, one whose types are subclasses of the other's, then the behavior of foo may be undefined; the result would depend on which typemap_types comes up first in the loop for typemap_types in self.typemap.
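One way to make that deterministic (just a sketch on top of the recipe above, not part of Guido's original) is to try the more specific registrations first, e.g. by sorting the candidate signatures by how derived their classes are:

def __call__(self, *args):
    types = tuple(arg.__class__ for arg in args)
    # Try more derived signatures first, so an (A, B) registration beats (A, A).
    candidates = sorted(self.typemap,
                        key=lambda sig: sum(len(t.__mro__) for t in sig),
                        reverse=True)
    for typemap_types in candidates:
        if len(typemap_types) == len(types) and all(
                issubclass(arg_type, known_type)
                for arg_type, known_type in zip(types, typemap_types)):
            return self.typemap[typemap_types](*args)
    raise TypeError("no match")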
super() returns a proxy object, not the parent class (because you can have multiple inheritance), so that wouldn't work. Using isinstance() is your best bet, although there's no way to make it as elegant as the dictionary lookups using type(arg).
I don't think allowing alternative argument orderings is a good idea; it's liable to lead to nasty surprises, and making it compatible with inheritance as well would be a significant headache. However, it would be quite simple to make a second decorator for "use this function if all the arguments are of type A", or "use this function if all the arguments are in types {A, B, E}".
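For instance, the "all arguments are of type A" variant could be a small standalone decorator, roughly like this (the name all_args_of is made up, and it is independent of the MultiMethod registry above):

def all_args_of(klass):
    def decorator(function):
        def wrapper(*args):
            # accept the call only if every argument is an instance of klass
            if not all(isinstance(arg, klass) for arg in args):
                raise TypeError("expected every argument to be a %s" % klass.__name__)
            return function(*args)
        return wrapper
    return decorator

@all_args_of(A)
def bar(arg1, arg2):
    print 'works'

bar(A(), B())  # works, because B is a subclass of A
bar(A(), C())  # raises TypeError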
I have the following structure for a class:
class foo(object):
    def __call__(self, param1):
        pass

class bar(object):
    def __call__(self, param1, param2):
        pass
I have many classes of this type, and I am using these callable classes as follows:
classes = [foo(), bar()]
for C in classes:
    res = C(param1)
    # Here I want to put a condition: if the class takes 1 argument, just pass
    # 1 parameter, otherwise pass two.
I have thought of one pattern like this:
class abc():
    def __init__(self):
        self.param1 = 'xyz'
        self.param2 = 'pqr'

    def something(self, classes):  # classes = [foo(), bar()]
        for C in classes:
            if C.__class__.__name__ in ['bar']:
                res = C(self.param1, self.param2)
            else:
                res = C(self.param2)
But in the above solution I have to maintain a list of the classes which take two arguments, and as I add more classes to the file this will become messy.
I don't know whether this is the correct (Pythonic) way to do it.
One more idea I have in mind is to check how many arguments the class takes: if it's 2, pass an additional argument, otherwise pass 1 argument. I have looked at this solution: How can I find the number of arguments of a Python function?. But I am not confident that this is the best-suited solution to my problem.
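For what it's worth, the "count the arguments" idea could look roughly like this; it is only a sketch that assumes every __call__ takes plain positional parameters, and call_with_what_fits is a made-up helper name:

import inspect

def call_with_what_fits(c, param1, param2):
    argspec = inspect.getargspec(c.__call__)
    nargs = len(argspec.args) - 1  # drop 'self'
    if nargs >= 2:
        return c(param1, param2)
    return c(param1)

# classes = [foo(), bar()]
# for C in classes:
#     res = call_with_what_fits(C, param1, param2)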
A few things about this:
There are only two types of classes in my use case: one that takes 1 argument and one that takes 2.
Both classes take the same first argument, so param1 is the same argument I am passing in both cases. In the case of the class with two required parameters, I am passing an additional argument (param2) containing some data.
PS: Any help or new ideas for this problem are appreciated.
UPD: Updated the code.
Basically, you want to use polymorphism on your objects' __call__() method, but you have an issue with your callables' signatures not being the same.
The plain simple answer to this is: you can only use polymorphism on compatible types, which in this case means that your callables MUST have compatible signatures.
Hopefully, there's a quick and easy way to solve this: just modify your methods' signatures so they accept varargs and kwargs:
class Foo(object):
    def __call__(self, param1, *args, **kw):
        pass

class Bar(object):
    def __call__(self, param1, param2, *args, **kw):
        pass
For the case where you can't change the callable's signature, there's still a workaround: use a lambda as proxy:
def func1(y, z):
    pass

def func2(x):
    pass

callables = [func1, lambda y, z: func2(y)]
for c in callables:
    c(42, 1138)
Note that this last example is actually known as the adapter pattern.
Unrelated: this:
if C.__class__.__name__ in ['bar']:
is an inefficient and convoluted way to write:
if C.__class__.__name__ == 'bar':
which is itself an inefficient, convoluted AND brittle way to write:
if type(C) is bar:
which, by itself, is a possible design smell (there are legit use cases for checking the exact type of an object, but most often this is really a design issue).
Ok, here is the real world scenario: I'm writing an application, and I have a class that represents a certain type of files (in my case this is photographs but that detail is irrelevant to the problem). Each instance of the Photograph class should be unique to the photo's filename.
The problem is, when a user tells my application to load a file, I need to be able to identify when files are already loaded, and use the existing instance for that filename, rather than create duplicate instances on the same filename.
To me this seems like a good situation to use memoization, and there are a lot of examples of that out there, but in this case I'm not just memoizing an ordinary function; I need to memoize __init__(). This poses a problem, because by the time __init__() gets called, it's already too late: a new instance has already been created.
In my research I found Python's __new__() method, and I was actually able to write a working trivial example, but it fell apart when I tried to use it on my real-world objects, and I'm not sure why (the only thing I can think of is that my real world objects were subclasses of other objects that I can't really control, and so there were some incompatibilities with this approach). This is what I had:
class Flub(object):
    instances = {}

    def __new__(cls, flubid):
        try:
            self = Flub.instances[flubid]
        except KeyError:
            self = Flub.instances[flubid] = super(Flub, cls).__new__(cls)
            print 'making a new one!'
            self.flubid = flubid
        print id(self)
        return self

    @staticmethod
    def destroy_all():
        for flub in Flub.instances.values():
            print 'killing', flub

a = Flub('foo')
b = Flub('foo')
c = Flub('bar')

print a
print b
print c
print a is b, b is c

Flub.destroy_all()
Which output this:
making a new one!
139958663753808
139958663753808
making a new one!
139958663753872
<__main__.Flub object at 0x7f4aaa6fb050>
<__main__.Flub object at 0x7f4aaa6fb050>
<__main__.Flub object at 0x7f4aaa6fb090>
True False
killing <__main__.Flub object at 0x7f4aaa6fb050>
killing <__main__.Flub object at 0x7f4aaa6fb090>
It's perfect! Only two instances were made for the two unique id's given, and Flub.instances clearly only has two listed.
But when I tried to take this approach with the objects I was using, I got all kinds of nonsensical errors about how __init__() took only 0 arguments, not 2. So I'd change some things around and then it would tell me that __init__() needed an argument. Totally bizarre.
After a while of fighting with it, I basically just gave up and moved all the __new__() black magic into a staticmethod called get, such that I could call Photograph.get(filename) and it would only call Photograph(filename) if filename wasn't already in Photograph.instances.
Does anybody know where I went wrong here? Is there some better way to do this?
Another way of thinking about it is that it's similar to a singleton, except it's not globally singleton, just singleton-per-filename.
Here's my real-world code using the staticmethod get if you want to see it all together.
Let us see two points about your question.
Using memoize
You can use memoization, but you should decorate the class, not the __init__ method. Suppose we have this memoizer:
def get_id_tuple(f, args, kwargs, mark=object()):
    """
    Some quick'n'dirty way to generate a unique key for a specific call.
    """
    l = [id(f)]
    for arg in args:
        l.append(id(arg))
    l.append(id(mark))
    for k, v in kwargs.items():
        l.append(k)
        l.append(id(v))
    return tuple(l)
_memoized = {}

def memoize(f):
    """
    Some basic memoizer
    """
    def memoized(*args, **kwargs):
        key = get_id_tuple(f, args, kwargs)
        if key not in _memoized:
            _memoized[key] = f(*args, **kwargs)
        return _memoized[key]
    return memoized
Now you just need to decorate the class:
@memoize
class Test(object):
    def __init__(self, somevalue):
        self.somevalue = somevalue
Let us run a test:
tests = [Test(1), Test(2), Test(3), Test(2), Test(4)]
for test in tests:
    print test.somevalue, id(test)
The output is below. Note that the same parameters yield the same id of the returned object:
1 3072319660
2 3072319692
3 3072319724
2 3072319692
4 3072319756
Anyway, I would prefer to create a function to generate the objects and memoize it. It seems cleaner to me, but that may be an irrelevant pet peeve of mine:
class Test(object):
    def __init__(self, somevalue):
        self.somevalue = somevalue

@memoize
def get_test_from_value(somevalue):
    return Test(somevalue)
Using __new__:
Or, of course, you can override __new__. Some days ago I posted an answer about the ins, outs and best practices of overriding __new__ that can be helpful. Basically, it says to always pass *args, **kwargs to your __new__ method.
I, for one, would prefer to memoize a function which creates the objects, or even to write a specific function which would take care of never recreating an object for the same parameters. Of course, however, this is mostly an opinion of mine, not a rule.
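A minimal sketch of that *args, **kwargs advice (the class name Cached is made up for illustration):

class Cached(object):
    _instances = {}

    def __new__(cls, key, *args, **kwargs):
        if key not in cls._instances:
            cls._instances[key] = super(Cached, cls).__new__(cls)
        return cls._instances[key]

    def __init__(self, key, *args, **kwargs):
        # __init__ is called with the same arguments as __new__,
        # so it has to accept them even if it ignores them.
        self.key = key

Note that __init__ still runs on every call, even when a cached instance is returned.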
The solution that I ended up using is this:
class memoize(object):
    def __init__(self, cls):
        self.cls = cls
        self.__dict__.update(cls.__dict__)

        # This bit allows staticmethods to work as you would expect.
        for attr, val in cls.__dict__.items():
            if type(val) is staticmethod:
                self.__dict__[attr] = val.__func__

    def __call__(self, *args):
        key = '//'.join(map(str, args))
        if key not in self.cls.instances:
            self.cls.instances[key] = self.cls(*args)
        return self.cls.instances[key]
And then you decorate the class with this, not __init__. Although brandizzi provided me with that key piece of information, his example decorator didn't function as desired.
I found this concept quite subtle, but basically when you're using decorators in Python, you need to understand that the thing that gets decorated (whether it's a method or a class) is actually replaced by the decorator itself. So for example when I'd try to access Photograph.instances or Camera.generate_id() (a staticmethod), I couldn't actually access them because Photograph doesn't actually refer to the original Photograph class, it refers to the memoized function (from brandizzi's example).
To get around this, I had to create a decorator class that actually took all the attributes and static methods from the decorated class and exposed them as its own. Almost like a subclass, except that the decorator class doesn't know ahead of time what classes it will be decorating, so it has to copy the attributes over after the fact.
The end result is that any instance of the memoize class becomes an almost transparent wrapper around the actual class that it has decorated, with the exception that attempting to instantiate it (but really calling it) will provide you with cached copies when they're available.
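For reference, usage looks roughly like this. The decorated class is assumed to provide its own instances dict, because memoize.__call__ looks it up on self.cls; the filenames are only illustrative:

@memoize
class Photograph(object):
    instances = {}

    def __init__(self, filename):
        self.filename = filename

a = Photograph('/photos/one.jpg')
b = Photograph('/photos/one.jpg')
print a is b  # True - the second call returned the cached instance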
The parameters to __new__ also get passed to __init__, so:
def __init__(self, flubid):
    ...
You need to accept the flubid argument there, even if you don't use it in __init__
Here is the relevant comment taken from typeobject.c in Python2.7.3
/* You may wonder why object.__new__() only complains about arguments
when object.__init__() is not overridden, and vice versa.
Consider the use cases:
1. When neither is overridden, we want to hear complaints about
excess (i.e., any) arguments, since their presence could
indicate there's a bug.
2. When defining an Immutable type, we are likely to override only
__new__(), since __init__() is called too late to initialize an
Immutable object. Since __new__() defines the signature for the
type, it would be a pain to have to override __init__() just to
stop it from complaining about excess arguments.
3. When defining a Mutable type, we are likely to override only
__init__(). So here the converse reasoning applies: we don't
want to have to override __new__() just to stop it from
complaining.
4. When __init__() is overridden, and the subclass __init__() calls
object.__init__(), the latter should complain about excess
arguments; ditto for __new__().
Use cases 2 and 3 make it unattractive to unconditionally check for
excess arguments. The best solution that addresses all four use
cases is as follows: __init__() complains about excess arguments
unless __new__() is overridden and __init__() is not overridden
(IOW, if __init__() is overridden or __new__() is not overridden);
symmetrically, __new__() complains about excess arguments unless
__init__() is overridden and __new__() is not overridden
(IOW, if __new__() is overridden or __init__() is not overridden).
However, for backwards compatibility, this breaks too much code.
Therefore, in 2.6, we'll *warn* about excess arguments when both
methods are overridden; for all other cases we'll use the above
rules.
*/
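Under those rules, a hedged sketch of the fix for the Flub example above is simply to give it an __init__ that accepts the same argument as __new__:

class Flub(object):
    instances = {}

    def __new__(cls, flubid):
        try:
            self = Flub.instances[flubid]
        except KeyError:
            self = Flub.instances[flubid] = super(Flub, cls).__new__(cls)
            self.flubid = flubid
        return self

    def __init__(self, flubid):
        # Accept flubid so the implicit __init__ call after __new__ does not
        # complain; the real work already happened in __new__.
        pass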
I was trying to figure this out as well, and I put together a solution that combines some tips from other Stack Overflow questions (links in the code comments).
If anyone still needs this, try it out:
import functools
from collections import OrderedDict

def memoize(f):
    class Memoized:
        # Sentinel separating positional args from keyword args in the cache key.
        _kwargs_mark = object()

        def __init__(self, func):
            self._f = func
            self._cache = {}
            # Make the Memoized class masquerade as the object we are memoizing.
            # Preserve class attributes
            functools.update_wrapper(self, func)
            # Preserve static methods
            # From https://stackoverflow.com/questions/11174362
            for k, v in func.__dict__.items():
                self.__dict__[k] = v.__func__ if type(v) is staticmethod else v

        def __call__(self, *args, **kwargs):
            # Generate a hashable key from the positional and keyword arguments
            key = args
            if kwargs:
                key += (self._kwargs_mark,)
                for k, v in kwargs.items():
                    key += (hash(k), hash(v))
            key = hash(key)
            if key in self._cache:
                return self._cache[key]
            else:
                self._cache[key] = self._f(*args, **kwargs)
                return self._cache[key]

        def __get__(self, instance, owner):
            """
            From https://stackoverflow.com/questions/30104047/how-can-i-decorate-an-instance-method-with-a-decorator-class
            """
            return functools.partial(self.__call__, instance)

        def __instancecheck__(self, other):
            """Make isinstance() work"""
            return isinstance(other, self._f)

    return Memoized(f)
Then you can use it like so:

@memoize
class Test:
    def __init__(self, value):
        self._value = value

    @property
    def value(self):
        return self._value
Uploaded the full thing with documentation to: https://github.com/spoorn/nemoize
I have two classes (let's call them Working and ReturnStatement) which I can't modify, but I want to extend both of them with logging. The trick is that Working's method returns a ReturnStatement object, so the new MutantWorking object also returns a ReturnStatement unless I can cast it to MutantReturnStatement. Said with code:
# these classes can't be changed
class ReturnStatement(object):
    def act(self):
        print "I'm a ReturnStatement."

class Working(object):
    def do(self):
        print "I am Working."
        return ReturnStatement()

# these classes should wrap the original ones
class MutantReturnStatement(ReturnStatement):
    def act(self):
        print "I'm wrapping ReturnStatement."
        return ReturnStatement().act()

class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        # !!! this is not working, I'd need that casting working !!!
        return (MutantReturnStatement) Working().do()

rs = MutantWorking().do()  # I can use MutantWorking just like Working
print "--"  # just to separate output
rs.act()  # this must be MutantReturnStatement.act(), I need the overloaded method
The expected result:
I am wrapping Working.
I am Working.
--
I'm wrapping ReturnStatement.
I'm a ReturnStatement.
Is it possible to solve the problem? I'm also curious whether the problem can be solved in PHP. Unless I get a working solution I can't accept the answer, so please write working code to get accepted.
There is no casting as the other answers already explained. You can make subclasses or make modified new types with the extra functionality using decorators.
Here's a complete example (credit to How to make a chain of function decorators?). You do not need to modify your original classes. In my example the original class is called Working.
# decorator for logging
def logging(func):
    def wrapper(*args, **kwargs):
        print func.__name__, args, kwargs
        res = func(*args, **kwargs)
        return res
    return wrapper
# this is some example class you do not want to/can not modify
class Working:
    def Do(c):
        print("I am working")
    def pr(c, printit):  # other example method
        print(printit)
    def bla(c):  # other example method
        c.pr("saybla")

# this is how to make a new class with some methods logged:
class MutantWorking(Working):
    pr = logging(Working.pr)
    bla = logging(Working.bla)
    Do = logging(Working.Do)
h=MutantWorking()
h.bla()
h.pr("Working")
h.Do()
this will print
bla (<__main__.MutantWorking instance at 0xb776b78c>,) {}
pr (<__main__.MutantWorking instance at 0xb776b78c>, 'saybla') {}
saybla
pr (<__main__.MutantWorking instance at 0xb776b78c>, 'Working') {}
Working
Do (<__main__.MutantWorking instance at 0xb776b78c>,) {}
I am working
In addition, I would like to understand why you cannot modify a class. Did you try? Because, as an alternative to making a subclass, if you feel dynamic you can almost always modify an old class in place:

Working.Do = logging(Working.Do)
ReturnStatement.act = logging(ReturnStatement.act)
Update: Apply logging to all methods of a class
As you now specifically asked for this: you can loop over all members and apply logging to them all. But you need to define a rule for what kind of members to modify. The example below excludes any method with __ in its name.
import types

def hasmethod(obj, name):
    return hasattr(obj, name) and type(getattr(obj, name)) == types.MethodType

def loggify(theclass):
    for x in filter(lambda x: "__" not in x, dir(theclass)):
        if hasmethod(theclass, x):
            print(x)
            setattr(theclass, x, logging(getattr(theclass, x)))
    return theclass
With this all you have to do to make a new logged version of a class is:
@loggify
class loggedWorker(Working): pass
Or modify an existing class in place:
loggify(Working)
There is no "casting" in Python.
Any subclass of a class is considered an instance of its parents. The desired behavior can be achieved by properly calling the superclass methods, and by overriding class attributes.
update: with the advent of static type checking, there is "type casting" - check below.
What you can do in your example is to have a subclass initializer that receives the superclass and copies its relevant attributes - so your MutantReturnStatement could be written thus:
class MutantReturnStatement(ReturnStatement):
    def __init__(self, previous_object=None):
        if previous_object:
            self.attribute = previous_object.attribute
            # repeat for relevant attributes

    def act(self):
        print "I'm wrapping ReturnStatement."
        return ReturnStatement().act()
And then change your MutantWorking class to:
class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        return MutantReturnStatement(Working().do())
There are Pythonic ways to avoid having a lot of self.attr = other.attr lines in the __init__ method if there are lots (like, more than 3 :-)) of attributes you want to copy - the laziest of which would be simply to copy the other instance's __dict__ attribute.
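In its laziest form that copy is a one-liner; as a sketch (it blindly copies every attribute of the wrapped instance):

class MutantReturnStatement(ReturnStatement):
    def __init__(self, previous_object=None):
        if previous_object is not None:
            # copy every attribute of the original instance onto the wrapper
            self.__dict__.update(previous_object.__dict__)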
Alternatively, if you know what you are doing, you could also simply change the __class__ attribute of your target object to the desired class - but that can be misleading and lead you to subtle errors (the __init__ method of the subclass would not be called, it would not work on classes not defined in Python, and there are other possible problems), so I don't recommend this approach. This is not "casting", it is using introspection to brute-force an object change, and it is only included to keep the answer complete:
class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        result = Working.do(self)
        result.__class__ = MutantReturnStatement
        return result
Again - this should work, but don't do it - use the former method.
By the way, I am not too experienced with other OO languages that allow casting - but is casting to a subclass even allowed in any language? Does it make sense? I think casting is only allowed to parent classes.
update: When one works with type hinting and static analysis in the ways described in PEP 484, sometimes the static analysis tool can't figure out what is going on. So, there is the typing.cast call: it does absolutely nothing at runtime, it just returns the same object that was passed to it, but the tools then "learn" that the returned object is of the passed type, and won't complain about it. It will remove typing errors in the helper tool, but I can't emphasise enough that it does not have any effect at runtime:
In [18]: from typing import cast
In [19]: cast(int, 3.4)
Out[19]: 3.4
No direct way.
You may define MutantReturnStatement's __init__ like this:
def __init__(self, retStatement):
    self.retStatement = retStatement
and then use it like this:
class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        return MutantReturnStatement(Working().do())
And you should get rid of inheriting from ReturnStatement in your wrapper, like this:
class MutantReturnStatement(object):
    def act(self):
        print "I'm wrapping ReturnStatement."
        return self.retStatement.act()
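Putting those two pieces together, a runnable sketch of this composition approach looks like this:

class MutantReturnStatement(object):
    def __init__(self, retStatement):
        self.retStatement = retStatement

    def act(self):
        print "I'm wrapping ReturnStatement."
        return self.retStatement.act()

class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        return MutantReturnStatement(Working().do())

rs = MutantWorking().do()
print "--"
rs.act()  # prints the wrapping message, then the original one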
You don't need casting here. You just need
class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        Working().do()
        return MutantReturnStatement()
This will obviously give the correct return and desired printout.
What you are doing is not casting, it is type conversion. Still, you could write something like:
from typing import Any, Type

def cast_to(mytype: Type[Any], obj: Any):
    if isinstance(obj, mytype):
        return obj
    else:
        return mytype(obj)

class MutantReturnStatement(ReturnStatement):
    def __init__(self, *args, **kwargs):
        if isinstance(args[0], Working):
            pass
            # your custom logic here
            # for the type conversion.
Usage:
cast_to(MutantReturnStatement, Working()).act()
# or simply
MutantReturnStatement(Working()).act()
(Note that in your example MutantReturnStatement does not have .do() member function.)
I wonder if there is a reasonably easy way to allow this code (with minor modifications) to work.
class Info(object):
    @attr("Version")
    def version(self):
        return 3

info = Info()
assert info.version == 3
assert info["Version"] == 3
Ideally, the code would do some caching/memoising as well, e.g. employ lazy attributes, but I hope to figure that out myself.
Additional information:
The reason why I want to provide two interfaces for accessing the same information is as follows.
I’d like to have a dict-like class which uses lazy keys. E.g. info["Version"] should call and cache another method and transparently return the result.
I don’t think that works with dicts alone, therefore I need to create new methods.
Methods alone won’t do either, because there are some attributes which are easier to define with pure dictionary syntax.
It probably is not the best idea anyway…
If the attribute name (version) is always a lowercase version of the dict key ("Version"), then you could set it up this way:
class Info(object):
    @property
    def version(self):
        return 3

    def __getitem__(self, key):
        if hasattr(self, key.lower()):
            return getattr(self, key.lower())
If you wish the dict key to be arbitrary, then it's still possible, though more complicated:
def attrcls(cls):
    cls._attrdict = {}
    for methodname in cls.__dict__:
        method = cls.__dict__[methodname]
        if hasattr(method, '_attr'):
            cls._attrdict[getattr(method, '_attr')] = methodname
    return cls

def attr(key):
    def wrapper(func):
        class Property(object):
            def __get__(self, inst, instcls):
                return func(inst)
            def __init__(self):
                self._attr = key
        return Property()
    return wrapper

@attrcls
class Info(object):
    @attr("Version")
    def version(self):
        return 3

    def __getitem__(self, key):
        if key in self._attrdict:
            return getattr(self, self._attrdict[key])
I guess the larger question is: is it a good interface? Why provide two syntaxes (with two different names) for the same thing?
Not trivially. You could use a metaclass to detect decorated methods and wrap __*attr__() and __*item__() appropriately.
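A rough sketch of that metaclass idea, with made-up names (AttrMeta, _attr_key): the decorator only tags the method, and the metaclass builds the key-to-method mapping, exposes each tagged method as a property, and supplies __getitem__:

def attr(key):
    def mark(func):
        func._attr_key = key
        return func
    return mark

class AttrMeta(type):
    def __new__(mcls, name, bases, namespace):
        mapping = {}
        for attrname, value in list(namespace.items()):
            key = getattr(value, '_attr_key', None)
            if key is not None:
                mapping[key] = attrname
                namespace[attrname] = property(value)  # expose as a plain attribute too
        namespace['_attrdict'] = mapping
        namespace['__getitem__'] = lambda self, key: getattr(self, self._attrdict[key])
        return type.__new__(mcls, name, bases, namespace)

class Info(object):
    __metaclass__ = AttrMeta  # Python 2 syntax; on Python 3 use class Info(metaclass=AttrMeta)

    @attr("Version")
    def version(self):
        return 3

info = Info()
assert info.version == 3
assert info["Version"] == 3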
Many times I have member functions that copy parameters into the object's fields. For example:
class NouveauRiches(object):
    def __init__(self, car, mansion, jet, bling):
        self.car = car
        self.mansion = mansion
        self.jet = jet
        self.bling = bling
Is there a python language construct that would make the above code less tedious?
One could use *args:
def __init__(self, *args):
    self.car, self.mansion, self.jet, self.bling = args
+: less tedious
-: function signature not revealing enough. need to dive into function code to know how to use function
-: does not raise a TypeError on call with wrong # of parameters (but does raise a ValueError)
Any other ideas? (Whatever your suggestion, make sure the code calling the function stays simple.)
You could do this with a helper method, something like this:
import inspect

def setargs(func):
    f = inspect.currentframe(1)
    argspec = inspect.getargspec(func)
    for arg in argspec.args:
        setattr(f.f_locals["self"], arg, f.f_locals[arg])
Usage:
class Foo(object):
    def __init__(self, bar, baz=4711):
        setargs(self.__init__)
        print self.bar  # Now defined
        print self.baz  # Now defined
This is not pretty, and it should probably only be used when prototyping. Please use explicit assignment if you plan to have others read it.
It could probably be improved not to need to take the function as an argument, but that would require even more ugly hacks and trickery :)
I would go for this; this way you could also override already-defined properties.
class D:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
But I personally would just go the long way. Think of these:
- Explicit is better than implicit.
- Flat is better than nested.
(The Zen of Python)
Try something like
d = dict(locals())
del d['self']
self.__dict__.update(d)
Of course, it returns all local variables, not just function arguments.
I am not sure this is such a good idea, but it can be done:
import inspect

class NouveauRiches(object):
    def __init__(self, car, mansion, jet, bling):
        frame = inspect.currentframe()  # the frame whose arguments we read
        arguments = inspect.getargvalues(frame)[0]
        values = inspect.getargvalues(frame)[3]
        for name in arguments:
            if name == 'self':
                continue
            self.__dict__[name] = values[name]
It does not read great either, though I suppose you could put this in a utility method that is reused.
You could try something like this:
class C(object):
    def __init__(self, **kwargs):
        for k in kwargs:
            d = {k: kwargs[k]}
            self.__dict__.update(d)
Or using setattr you can do:
class D(object):
    def __init__(self, **kwargs):
        for k in kwargs:
            setattr(self, k, kwargs[k])
Both can then be called like:
myclass = C(test=1, test2=2)
So you have to use **kwargs, rather than *args.
I sometimes do this for classes that act "bunch-like", that is, they have a bunch of customizable attributes:
class SuperClass(object):
    def __init__(self, **kw):
        for name, value in kw.iteritems():
            if not hasattr(self, name):
                raise TypeError('Unexpected argument: %s' % name)
            setattr(self, name, value)

class SubClass(SuperClass):
    instance_var = None  # default value

class SubClass2(SubClass):
    other_instance_var = True

    @property
    def something_dynamic(self):
        return self._internal_var

    @something_dynamic.setter  # new Python 2.6 feature of properties
    def something_dynamic(self, value):
        assert value is None or isinstance(value, str)
        self._internal_var = value
Then you can call SubClass2(instance_var=[], other_instance_var=False) and it'll work without defining __init__ in either of them. You can use any property as well. Though this allows you to overwrite methods, which you probably wouldn't intend (as they return True for hasattr() just like an instance variable).
If you add any property or other descriptor it will work fine. You can use that to do type checking; unlike type checking in __init__, it'll be applied any time that value is updated. Note you can't use any positional arguments for these unless you override __init__, so sometimes what would be a natural positional argument won't work. formencode.declarative covers this and other issues, probably with a thoroughness I would not suggest you attempt (in retrospect I don't think it's worth it).
Note that any recipe that uses self.__dict__ won't respect properties and descriptors, and if you use those together you'll just get weird and unexpected results. I only recommend using setattr() to set attributes, never self.__dict__.
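To see why, here is a tiny sketch (Widget is a made-up class): setattr() goes through a property's setter, while writing into __dict__ never reaches it, and the property keeps winning on attribute access anyway.

class Widget(object):
    @property
    def size(self):
        return self._size

    @size.setter
    def size(self, value):
        assert isinstance(value, int)
        self._size = value

w = Widget()
setattr(w, 'size', 3)        # goes through the property setter
print w.size                 # 3

w.__dict__['size'] = 'huge'  # never reaches the setter...
print w.size                 # ...and still prints 3: the property ignores the dict entry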
Also this recipe doesn't give a very helpful signature, while some of the ones that do frame and function introspection do. With some work it is possible to dynamically generate a __doc__ that clarifies the arguments... but again I'm not sure the payoff is worth the addition of more moving parts.
I am a fan of the following
import inspect

def args_to_attrs(otherself):
    frame = inspect.currentframe(1)
    argvalues = inspect.getargvalues(frame)
    for arg in argvalues.args:
        if arg == 'self':
            continue
        value = argvalues.locals[arg]
        setattr(otherself, arg, value)

class MyClass:
    def __init__(self, arga="baf", argb="lek", argc=None):
        args_to_attrs(self)
Arguments to __init__ are explicitly named, so it is clear what attributes are being set. Additionally, it is a little bit streamlined over the currently accepted answer.