Like in this question, except I want to be able to have querysets that return a mixed body of objects:
>>> Product.objects.all()
[<SimpleProduct: ...>, <OtherProduct: ...>, <BlueProduct: ...>, ...]
I figured out that I can't just set Product.Meta.abstract to True or otherwise just OR together querysets of differing objects. Fine, but these are all subclasses of a common class, so if I leave their superclass as non-abstract I should be happy, so long as I can get its manager to return objects of the proper class.

The query code in django does its thing, and just makes calls to Product(). Sounds easy enough, except it blows up when I override Product.__new__; I'm guessing because of the __metaclass__ in Model...

Here's non-django code that behaves pretty much how I want it:
class Top(object):
    _counter = 0
    def __init__(self, arg):
        Top._counter += 1
        print "Top#__init__(%s) called %d times" % (arg, Top._counter)

class A(Top):
    def __new__(cls, *args, **kwargs):
        if cls is A and len(args) > 0:
            if args[0] is B.fav:
                return B(*args, **kwargs)
            elif args[0] is C.fav:
                return C(*args, **kwargs)
            else:
                print "PRETENDING TO BE ABSTRACT"
                return None  # or raise?
        else:
            return super(A).__new__(cls, *args, **kwargs)

class B(A):
    fav = 1

class C(A):
    fav = 2

A(0)  # => None
A(1)  # => <B object>
A(2)  # => <C object>
But that fails if I inherit from django.db.models.Model instead of object:
File "/home/martin/beehive/apps/hello_world/models.py", line 50, in <module>
A(0)
TypeError: unbound method __new__() must be called with A instance as first argument (got ModelBase instance instead)
Which is a notably crappy backtrace; I can't step into the frame of my __new__ code in the debugger, either. I have variously tried super(A, cls), Top, super(A, A), and all of the above in combination with passing cls in as the first argument to __new__, all to no avail. Why is this kicking me so hard? Do I have to figure out django's metaclasses to be able to fix this or is there a better way to accomplish my ends?
Basically, what you're trying to do is return the different child classes while querying a shared base class; that is, you want the leaf classes. Check this snippet for a solution: http://www.djangosnippets.org/snippets/1034/
Also be sure to check out the docs on Django's Contenttypes framework: http://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/ It can be a bit confusing at first, but Contenttypes will solve additional problems you'll probably face when using non-abstract base classes with Django's ORM.
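For reference, the approach in that snippet is roughly the following shape (a sketch, not the snippet verbatim; the field and method names here are illustrative): the base model remembers its concrete subclass in a ContentType foreign key, and a helper re-fetches each row as its leaf class.

from django.contrib.contenttypes.models import ContentType
from django.db import models

class Product(models.Model):
    # Remember which concrete subclass each row was saved as.
    # (Django of that era; newer versions also require on_delete.)
    content_type = models.ForeignKey(ContentType, editable=False, null=True)

    def save(self, *args, **kwargs):
        if self.content_type_id is None:
            self.content_type = ContentType.objects.get_for_model(self.__class__)
        super(Product, self).save(*args, **kwargs)

    def as_leaf_class(self):
        # Re-fetch this row as the subclass it was saved as.
        model = self.content_type.model_class()
        if model == Product:
            return self
        return model.objects.get(pk=self.pk)

Querysets on Product still return Product instances; you map as_leaf_class() over them to get the mixed SimpleProduct/OtherProduct/BlueProduct objects the question asks for.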
You want one of these:
http://code.google.com/p/django-polymorphic-models/
https://github.com/bconstantin/django_polymorphic
There are downsides, namely extra queries.
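With django_polymorphic the usage is roughly like this (a sketch based on its documented pattern; the import path has moved between versions, so check the project's README):

from django.db import models
from polymorphic import PolymorphicModel  # newer versions: polymorphic.models

class Product(PolymorphicModel):
    name = models.CharField(max_length=100)

class BlueProduct(Product):
    shade = models.CharField(max_length=20)

# Product.objects.all() then yields BlueProduct instances for blue rows -
# at the cost of the extra queries mentioned above.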
Okay, this works: https://gist.github.com/348872
The tricky bit was this.
class A(Top):
    pass

def newA(cls, *args, **kwargs):
    # [all that code you wrote for A.__new__]

A.__new__ = staticmethod(newA)
Now, there's something about how Python binds __new__ that I maybe don't quite understand, but the gist of it is this: django's ModelBase metaclass creates a new class object, rather than using the one that's passed in to its __new__; call that A_prime. Then it sticks all the attributes you had in the class definition for A on to A_prime, but __new__ doesn't get re-bound correctly.
Then when you evaluate A(1), A is actually A_prime here, python calls <A.__new__>(A_prime, 1), which doesn't match up, and it explodes.
So the solution is to define your __new__ after A_prime has been defined.
Maybe this is a bug in django.db.models.base.ModelBase.add_to_class, maybe it's a bug in Python, I don't know.
Now, when I said "this works" earlier, I meant this works in isolation with the minimal object construction test case in the current SVN version of Django. I don't know if it actually works as a Model or is useful in a QuerySet. If you actually use this in production code, I will make a public lightning talk out of it for pdxpython and have them mock you until you buy us all gluten-free pizza.
Simply stick @staticmethod before the __new__ method.

@staticmethod
def __new__(cls, *args, **kwargs):
    print args, kwargs
    return super(License, cls).__new__(cls, *args, **kwargs)
Another approach that I recently found: http://jeffelmore.org/2010/11/11/automatic-downcasting-of-inherited-models-in-django/
So I would like to update a class that is in a library, but I have no control over that class (I can't touch the original source code). Constraint number 2: other users have already inherited this parent class, and asking them to inherit from a third class would be a bit "annoying". So I have to work with both constraints at the same time: needing to extend the parent class, but not by inheriting from it.

One solution seemed to make more sense at first, although it's bordering on "monkey-patching": overriding some methods of the parent class with my own. I wrote a little decorator that could do that. But I ran into an error, and rather than giving you the ENTIRE code, here is an example. Consider that the following class, named Old here (the parent class), is in a library I can't touch (regarding its source code, anyway):
class Old(object):
    def __init__(self, value):
        self.value = value

    def at_disp(self, other):
        print "Value is", self.value, other
        return self.value
That's a simple class, a constructor and a method with a parameter (to test a bit more). Nothing really hard so far. But here comes my decorator to extend a method of this class:
def override_hook(typeclass, method_name):
    hook = getattr(typeclass, method_name)
    def wrapper(method):
        def overriden_hook(*args, **kwargs):
            print "Before the hook is called."
            kwargs["hook"] = hook
            ret = method(*args, **kwargs)
            print "After the hook"
            return ret
        setattr(typeclass, method_name, overriden_hook)
        return overriden_hook
    return wrapper

@override_hook(Old, "at_disp")
def new_disp(self, other, hook):
    print "In the new hook, before"
    ret = hook(self, other)
    print "In the new hook, after"
    return ret
Surprisingly, this works perfectly. If you create an Old instance and call its at_disp method, the new method is called (and calls the old one). Much like hidden inheritance. But here is the real challenge:

We'll try to have a class inheriting from Old. That's what users have done. My "patch" should apply to them too, without them needing to do anything:
class New(Old):
    def at_disp(self, other):
        print "That's in New..."
        return super(Old, self).at_disp(self, other)
If you create a New object and try its at_disp method... it crashes: super() cannot find at_disp in Old. Which is odd, because New directly inherits from Old. My guess is that, since my new, replacement method is unbound, super() doesn't find it properly. If you replace super() with a direct call to Old.at_disp(), everything works.
Does somebody know how to fix this issue? And why it happens?
Thanks very much!
Two problems.
First, the call to super should be super(New, self), not super(Old, self). The first argument to super is generally the "current" class (i.e., the class whose method is calling super).
Second, the call to the at_disp method should just be at_disp(other), not at_disp(self, other). When you use the two-argument form of super, you get a bound super object that acts like an instance, so self will automatically be passed if you call a method on it.
So the call should be super(New, self).at_disp(other). Then it works.
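With both fixes applied, the subclass reads:

class New(Old):
    def at_disp(self, other):
        print "That's in New..."
        return super(New, self).at_disp(other)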
I have code in which all objects descend from a base object, which I don't plan to instantiate directly. In the __init__() method of my base object I'm trying to perform some magic -- I am trying to decorate, or wrap, every method of the object being initialized. But I'm getting a result that puzzles me when I call the resulting methods. Here is example code that isolates the problem:
class ParentObject(object):
    def __init__(self):
        self._adjust_methods(self.__class__)

    def _adjust_methods(self, cls):
        for attr, val in cls.__dict__.iteritems():
            if callable(val) and not attr.startswith("_"):
                setattr(cls, attr, self._smile_warmly(val))
        bases = cls.__bases__
        for base in bases:
            if base.__name__ != 'object':
                self._adjust_methods(base)

    def _smile_warmly(self, the_method):
        def _wrapped(cls, *args, **kwargs):
            print "\n-smile_warmly - " + cls.__name__
            the_method(self, *args, **kwargs)
        cmethod_wrapped = classmethod(_wrapped)
        return cmethod_wrapped

class SonObject(ParentObject):
    def hello_son(self):
        print "hello son"

    def get_sister(self):
        sis = DaughterObject()
        print type(sis)
        return sis

class DaughterObject(ParentObject):
    def hello_daughter(self):
        print "hello daughter"

    def get_brother(self):
        bro = SonObject()
        print type(bro)
        return bro

if __name__ == '__main__':
    son = SonObject()
    son.hello_son()
    daughter = DaughterObject()
    daughter.hello_daughter()
    sis = son.get_sister()
    print type(sis)
    sis.hello_daughter()
    bro = sis.get_brother()
    print type(bro)
    bro.hello_son()
The program crashes, however -- the line sis = son.get_sister() results in the sis object having a type of NoneType. Here is the output:
-smile_warmly - SonObject
hello son
-smile_warmly - DaughterObject
hello daughter
-smile_warmly - SonObject
<class '__main__.DaughterObject'>
<type 'NoneType'>
Traceback (most recent call last):
File "metaclass_decoration_test.py", line 48, in <module>
sis.hello_daughter()
AttributeError: 'NoneType' object has no attribute 'hello_daughter'
Why is this happening?
Try changing:
def _wrapped(cls, *args, **kwargs):
    print "\n-smile_warmly - " + cls.__name__
    the_method(self, *args, **kwargs)
to
def _wrapped(cls, *args, **kwargs):
    print "\n-smile_warmly - " + cls.__name__
    return the_method(self, *args, **kwargs)
Your _wrapped method is calling the method that is being wrapped, but not returning that method's return value.
Well, I don't really want to even touch the craziness that is going on in this code, but your error specifically is because your "decorator" is not returning anything from the wrapped function:
def _smile_warmly(self, the_method):
    def _wrapped(cls, *args, **kwargs):
        print "\n-smile_warmly - " + cls.__name__
        return the_method(self, *args, **kwargs)  # return here
    cmethod_wrapped = classmethod(_wrapped)
    return cmethod_wrapped
The problem is that you are wrapping all methods of your classes, including get_sister. You could do as @Paul McGuire suggests and add the return to the wrapper, but that will mean that the "smile" message is printed when you call son.get_sister, which probably isn't what you want.
What you probably need to do instead is add some logic inside _adjust_methods to decide precisely which methods to wrap. Instead of just checking for callable and not startswith('_'), you could have some naming convention for ones you do or don't want to wrap with smile behavior. However, the more you do this, the less the automatic decoration will benefit you as compared to just manually decorating the methods you want to decorate. It's a little hard to understand why you want to use the structure you apparently want to use (all classmethods, wrapping everything, etc.). Perhaps if you explained what your ultimate goal is here someone could suggest a more straightforward design.
Moreover, even if you add the return or the extra logic for wrapping, you'll still have the problem I mentioned in your other question: since you do the wrapping in __init__, it is going to happen every time you instantiate a class, so you will keep adding more and more wrappers. This is why I suggested there that you should use a class decorator, or, if you must, a metaclass. Messing with class attributes (including methods) in __init__ is not a good idea because they'll get messed with over and over, once for each instance you create.
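A minimal sketch of that suggestion, with the wrapping moved out of __init__ and into a class decorator so it runs exactly once, at class-definition time (the names smile_warmly_all and _make_wrapper are illustrative, and this wraps plain instance methods rather than reproducing the classmethod machinery):

def _make_wrapper(the_method):
    def _wrapped(self, *args, **kwargs):
        print "\n-smile_warmly - " + type(self).__name__
        return the_method(self, *args, **kwargs)
    return _wrapped

def smile_warmly_all(cls):
    # Wrap each public method once, when the class is defined.
    for attr, val in cls.__dict__.items():
        if callable(val) and not attr.startswith("_"):
            setattr(cls, attr, _make_wrapper(val))
    return cls

@smile_warmly_all
class SonObject(object):
    def hello_son(self):
        print "hello son"

SonObject().hello_son()  # prints the smile line, then "hello son"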
The missing return in @PaulMcGuire's reply is the cause of the bug.
On a higher level, it looks like you're trying to do via inheritance what might more "commonly" (this is hardly a common approach) be done via a metaclass. Maybe something like this discussion of metaclasses would point you in a slightly more manageable direction.
Ok, here is the real world scenario: I'm writing an application, and I have a class that represents a certain type of files (in my case this is photographs but that detail is irrelevant to the problem). Each instance of the Photograph class should be unique to the photo's filename.
The problem is, when a user tells my application to load a file, I need to be able to identify when files are already loaded, and use the existing instance for that filename, rather than create duplicate instances on the same filename.
To me this seems like a good situation to use memoization, and there's a lot of examples of that out there, but in this case I'm not just memoizing an ordinary function, I need to be memoizing __init__(). This poses a problem, because by the time __init__() gets called it's already too late as there's a new instance created already.
In my research I found Python's __new__() method, and I was actually able to write a working trivial example, but it fell apart when I tried to use it on my real-world objects, and I'm not sure why (the only thing I can think of is that my real world objects were subclasses of other objects that I can't really control, and so there were some incompatibilities with this approach). This is what I had:
class Flub(object):
    instances = {}

    def __new__(cls, flubid):
        try:
            self = Flub.instances[flubid]
        except KeyError:
            self = Flub.instances[flubid] = super(Flub, cls).__new__(cls)
            print 'making a new one!'
        self.flubid = flubid
        print id(self)
        return self

    @staticmethod
    def destroy_all():
        for flub in Flub.instances.values():
            print 'killing', flub

a = Flub('foo')
b = Flub('foo')
c = Flub('bar')

print a
print b
print c
print a is b, b is c

Flub.destroy_all()
Which output this:
making a new one!
139958663753808
139958663753808
making a new one!
139958663753872
<__main__.Flub object at 0x7f4aaa6fb050>
<__main__.Flub object at 0x7f4aaa6fb050>
<__main__.Flub object at 0x7f4aaa6fb090>
True False
killing <__main__.Flub object at 0x7f4aaa6fb050>
killing <__main__.Flub object at 0x7f4aaa6fb090>
It's perfect! Only two instances were made for the two unique id's given, and Flub.instances clearly only has two listed.
But when I tried to take this approach with the objects I was using, I got all kinds of nonsensical errors about how __init__() took only 0 arguments, not 2. So I'd change some things around and then it would tell me that __init__() needed an argument. Totally bizarre.
After a while of fighting with it, I basically just gave up and moved all the __new__() black magic into a staticmethod called get, such that I could call Photograph.get(filename) and it would only call Photograph(filename) if filename wasn't already in Photograph.instances.
Does anybody know where I went wrong here? Is there some better way to do this?
Another way of thinking about it is that it's similar to a singleton, except it's not globally singleton, just singleton-per-filename.
Here's my real-world code using the staticmethod get if you want to see it all together.
Let us look at two points about your question.
Using memoize
You can use memoization, but you should decorate the class, not the __init__ method. Suppose we have this memoizator:
def get_id_tuple(f, args, kwargs, mark=object()):
    """
    Some quick'n'dirty way to generate a unique key for a specific call.
    """
    l = [id(f)]
    for arg in args:
        l.append(id(arg))
    l.append(id(mark))
    for k, v in kwargs.items():
        l.append(k)
        l.append(id(v))
    return tuple(l)

_memoized = {}
def memoize(f):
    """
    Some basic memoizer
    """
    def memoized(*args, **kwargs):
        key = get_id_tuple(f, args, kwargs)
        if key not in _memoized:
            _memoized[key] = f(*args, **kwargs)
        return _memoized[key]
    return memoized
Now you just need to decorate the class:
@memoize
class Test(object):
    def __init__(self, somevalue):
        self.somevalue = somevalue
Let us run a test:
tests = [Test(1), Test(2), Test(3), Test(2), Test(4)]
for test in tests:
    print test.somevalue, id(test)
The output is below. Note that the same parameters yield the same id of the returned object:
1 3072319660
2 3072319692
3 3072319724
2 3072319692
4 3072319756
Anyway, I would prefer to create a function to generate the objects and memoize it. It seems cleaner to me, but that may just be a pet peeve of mine:
class Test(object):
    def __init__(self, somevalue):
        self.somevalue = somevalue

@memoize
def get_test_from_value(somevalue):
    return Test(somevalue)
Using __new__:
Or, of course, you can override __new__. Some days ago I posted an answer about the ins, outs and best practices of overriding __new__ that can be helpful. Basically, it says to always pass *args, **kwargs to your __new__ method.
I, for one, would prefer to memoize a function which creates the objects, or even write a specific function which would take care of never recreating an object for the same parameter. Of course, however, this is mostly an opinion of mine, not a rule.
The solution that I ended up using is this:
class memoize(object):
    def __init__(self, cls):
        self.cls = cls
        self.__dict__.update(cls.__dict__)

        # This bit allows staticmethods to work as you would expect.
        for attr, val in cls.__dict__.items():
            if type(val) is staticmethod:
                self.__dict__[attr] = val.__func__

    def __call__(self, *args):
        key = '//'.join(map(str, args))
        if key not in self.cls.instances:
            self.cls.instances[key] = self.cls(*args)
        return self.cls.instances[key]
And then you decorate the class with this, not __init__. Although brandizzi provided me with that key piece of information, his example decorator didn't function as desired.
I found this concept quite subtle, but basically when you're using decorators in Python, you need to understand that the thing that gets decorated (whether it's a method or a class) is actually replaced by the decorator itself. So for example when I'd try to access Photograph.instances or Camera.generate_id() (a staticmethod), I couldn't actually access them because Photograph doesn't actually refer to the original Photograph class, it refers to the memoized function (from brandizzi's example).
To get around this, I had to create a decorator class that actually took all the attributes and static methods from the decorated class and exposed them as its own. Almost like a subclass, except that the decorator class doesn't know ahead of time what classes it will be decorating, so it has to copy the attributes over after the fact.
The end result is that any instance of the memoize class becomes an almost transparent wrapper around the actual class that it has decorated, with the exception that attempting to instantiate it (but really calling it) will provide you with cached copies when they're available.
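Usage then looks something like this (Photograph here stands in for the real class; note that the decorated class must define the instances dict that the decorator relies on):

@memoize
class Photograph(object):
    instances = {}

    def __init__(self, filename):
        self.filename = filename

p1 = Photograph('/photos/a.jpg')
p2 = Photograph('/photos/a.jpg')
print p1 is p2  # True - the second call returns the cached instance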
The parameters to __new__ also get passed to __init__, so:
def __init__(self, flubid):
    ...
You need to accept the flubid argument there, even if you don't use it in __init__
Here is the relevant comment taken from typeobject.c in Python 2.7.3:
/* You may wonder why object.__new__() only complains about arguments
when object.__init__() is not overridden, and vice versa.
Consider the use cases:
1. When neither is overridden, we want to hear complaints about
excess (i.e., any) arguments, since their presence could
indicate there's a bug.
2. When defining an Immutable type, we are likely to override only
__new__(), since __init__() is called too late to initialize an
Immutable object. Since __new__() defines the signature for the
type, it would be a pain to have to override __init__() just to
stop it from complaining about excess arguments.
3. When defining a Mutable type, we are likely to override only
__init__(). So here the converse reasoning applies: we don't
want to have to override __new__() just to stop it from
complaining.
4. When __init__() is overridden, and the subclass __init__() calls
object.__init__(), the latter should complain about excess
arguments; ditto for __new__().
Use cases 2 and 3 make it unattractive to unconditionally check for
excess arguments. The best solution that addresses all four use
cases is as follows: __init__() complains about excess arguments
unless __new__() is overridden and __init__() is not overridden
(IOW, if __init__() is overridden or __new__() is not overridden);
symmetrically, __new__() complains about excess arguments unless
__init__() is overridden and __new__() is not overridden
(IOW, if __new__() is overridden or __init__() is not overridden).
However, for backwards compatibility, this breaks too much code.
Therefore, in 2.6, we'll *warn* about excess arguments when both
methods are overridden; for all other cases we'll use the above
rules.
*/
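A quick demonstration of those rules, using hypothetical minimal classes (this also reproduces the kind of error described in the question):

class OnlyNew(object):
    # __new__ overridden, __init__ not: object.__init__ stays quiet
    # about the excess argument.
    def __new__(cls, flubid):
        return super(OnlyNew, cls).__new__(cls)

OnlyNew('foo')  # works

class Both(object):
    def __new__(cls, flubid):
        return super(Both, cls).__new__(cls)
    def __init__(self):
        pass

Both('foo')  # TypeError: __init__() takes exactly 1 argument (2 given)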
I was trying to figure this out as well, and I put together a solution that combines some tips from other StackOverflow questions (links in the code comments).
If anyone still needs, try this out:
import functools

def memoize(f):
    class Memoized:
        # Sentinel marking the start of keyword arguments in a key.
        # (Created once; a fresh object() per call would defeat the cache.)
        _kwargs_mark = object()

        def __init__(self, func):
            self._f = func
            self._cache = {}
            # Make the Memoized class masquerade as the object we are memoizing.
            # Preserve class attributes
            functools.update_wrapper(self, func)
            # Preserve static methods
            # From https://stackoverflow.com/questions/11174362
            for k, v in func.__dict__.items():
                self.__dict__[k] = v.__func__ if type(v) is staticmethod else v

        def __call__(self, *args, **kwargs):
            # Generate a hashable key from the arguments
            key = args
            if kwargs:
                key += (self._kwargs_mark,)
                for k, v in kwargs.items():
                    key += (hash(k), hash(v))
            key = hash(key)
            if key in self._cache:
                return self._cache[key]
            else:
                self._cache[key] = self._f(*args, **kwargs)
                return self._cache[key]

        def __get__(self, instance, owner):
            """
            From https://stackoverflow.com/questions/30104047/how-can-i-decorate-an-instance-method-with-a-decorator-class
            """
            return functools.partial(self.__call__, instance)

        def __instancecheck__(self, other):
            """Make isinstance() work"""
            return isinstance(other, self._f)

    return Memoized(f)
Then you can use it like so:
@memoize
class Test:
    def __init__(self, value):
        self._value = value

    @property
    def value(self):
        return self._value
Uploaded the full thing with documentation to: https://github.com/spoorn/nemoize
I have two classes (let's call them Working and ReturnStatement) which I can't modify, but I want to extend both of them with logging. The trick is that the Working's method returns a ReturnStatement object, so the new MutantWorking object also returns ReturnStatement unless I can cast it to MutantReturnStatement. Saying with code:
# these classes can't be changed
class ReturnStatement(object):
    def act(self):
        print "I'm a ReturnStatement."

class Working(object):
    def do(self):
        print "I am Working."
        return ReturnStatement()

# these classes should wrap the original ones
class MutantReturnStatement(ReturnStatement):
    def act(self):
        print "I'm wrapping ReturnStatement."
        return ReturnStatement().act()

class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        # !!! this is not working, I'd need that casting working !!!
        return (MutantReturnStatement) Working().do()

rs = MutantWorking().do()  # I can use MutantWorking just like Working
print "--"  # just to separate output
rs.act()  # this must be MutantReturnState.act(), I need the overloaded method
The expected result:
I am wrapping Working.
I am Working.
--
I'm wrapping ReturnStatement.
I'm a ReturnStatement.
Is it possible to solve the problem? I'm also curious if the problem can be solved in PHP, too. Unless I get a working solution I can't accept the answer, so please write working code to get accepted.
There is no casting as the other answers already explained. You can make subclasses or make modified new types with the extra functionality using decorators.
Here's a complete example (credit to How to make a chain of function decorators?). You do not need to modify your original classes. In my example the original class is called Working.
# decorator for logging
def logging(func):
    def wrapper(*args, **kwargs):
        print func.__name__, args, kwargs
        res = func(*args, **kwargs)
        return res
    return wrapper

# this is some example class you do not want to/can not modify
class Working:
    def Do(c):
        print("I am working")
    def pr(c, printit):  # other example method
        print(printit)
    def bla(c):  # other example method
        c.pr("saybla")

# this is how to make a new class with some methods logged:
class MutantWorking(Working):
    pr = logging(Working.pr)
    bla = logging(Working.bla)
    Do = logging(Working.Do)

h = MutantWorking()
h.bla()
h.pr("Working")
h.Do()
This will print:
bla (<__main__.MutantWorking instance at 0xb776b78c>,) {}
pr (<__main__.MutantWorking instance at 0xb776b78c>, 'saybla') {}
saybla
pr (<__main__.MutantWorking instance at 0xb776b78c>, 'Working') {}
Working
Do (<__main__.MutantWorking instance at 0xb776b78c>,) {}
I am working
In addition, I would like to understand why you can not modify a class. Did you try? Because, as an alternative to making a subclass, if you feel dynamic you can almost always modify an old class in place:
Working.Do = logging(Working.Do)
ReturnStatement.act = logging(ReturnStatement.act)
Update: Apply logging to all methods of a class
As you now specifically asked for this: you can loop over all members and apply logging to them all. But you need to define a rule for what kind of members to modify. The example below excludes any method with __ in its name.
import types

def hasmethod(obj, name):
    return hasattr(obj, name) and type(getattr(obj, name)) == types.MethodType

def loggify(theclass):
    for x in filter(lambda x: "__" not in x, dir(theclass)):
        if hasmethod(theclass, x):
            print(x)
            setattr(theclass, x, logging(getattr(theclass, x)))
    return theclass
With this all you have to do to make a new logged version of a class is:
@loggify
class loggedWorker(Working): pass
Or modify an existing class in place:
loggify(Working)
There is no "casting" in Python.
Any subclass of a class is considered an instance of its parents. Desired behavior can be achieved by properly calling the superclass methods, and by overriding class attributes.

update: with the advent of static type checking, there is "type casting" - check below.
What you can do in your example is to have a subclass initializer that receives the superclass instance and copies its relevant attributes - so your MutantReturnStatement could be written thus:
class MutantReturnStatement(ReturnStatement):
    def __init__(self, previous_object=None):
        if previous_object:
            self.attribute = previous_object.attribute
        # repeat for relevant attributes

    def act(self):
        print "I'm wrapping ReturnStatement."
        return ReturnStatement().act()
And then change your MutantWorking class to:
class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        return MutantReturnStatement(Working().do())
There are Pythonic ways to avoid a lot of self.attr = other.attr lines in the __init__ method if there are lots (like, more than 3 :-) ) of attributes you want to copy - the laziest of which would be simply to copy the other instance's __dict__ attribute.

Alternatively, if you know what you are doing, you could also simply change the __class__ attribute of your target object to the desired class - but that can be misleading and carry you into subtle errors (the __init__ method of the subclass would not be called, it would not work on non-Python-defined classes, and other possible problems). I don't recommend this approach - it is not "casting", it is use of introspection to brute-force an object change, and is only included to keep the answer complete:
class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        result = Working.do(self)
        result.__class__ = MutantReturnStatement
        return result
Again - this should work, but don't do it - use the former method.
By the way, I am not too experienced with other OO languages that allow casting - but is casting to a subclass even allowed in any language? Does it make sense? I think casting is only allowed to parent classes.
update: When one works with type hinting and static analysis in the ways described in PEP 484, sometimes the static analysis tool can't figure out what is going on. So, there is the typing.cast call: it does absolutely nothing at runtime, it just returns the same object that was passed to it, but the tools then "learn" that the returned object is of the passed type, and won't complain about it. It will remove typing errors in the helper tool, but I can't emphasize enough that it does not have any effect at runtime:
In [18]: from typing import cast
In [19]: cast(int, 3.4)
Out[19]: 3.4
No direct way.
You may define MutantReturnStatement's init like this:
def __init__(self, retStatement):
    self.retStatement = retStatement
and then use it like this:
class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        return MutantReturnStatement(Working().do())
And you should get rid of inheriting from ReturnStatement in your wrapper, like this:
class MutantReturnStatement(object):
    def act(self):
        print "I'm wrapping ReturnStatement."
        return self.retStatement.act()
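Assembled with the question's original ReturnStatement and Working classes, the whole delegation approach reads:

class MutantReturnStatement(object):
    def __init__(self, retStatement):
        self.retStatement = retStatement

    def act(self):
        print "I'm wrapping ReturnStatement."
        return self.retStatement.act()

class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        return MutantReturnStatement(Working().do())

rs = MutantWorking().do()
print "--"
rs.act()  # produces exactly the expected output from the question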
You don't need casting here. You just need
class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        Working().do()
        return MutantReturnStatement()
This will obviously give the correct return and desired printout.
What you are doing is not casting, it is type conversion. Still, you could write something like:
from typing import Any, Type

def cast_to(mytype: Type[Any], obj: Any):
    if isinstance(obj, mytype):
        return obj
    else:
        return mytype(obj)

class MutantReturnStatement(ReturnStatement):
    def __init__(self, *args, **kwargs):
        if isinstance(args[0], Working):
            pass
            # your custom logic here
            # for the type conversion.
Usage:
cast_to(MutantReturnStatement, Working()).act()
# or simply
MutantReturnStatement(Working()).act()
(Note that in your example MutantReturnStatement does not have .do() member function.)
I think you can define either __init__ or __new__ in a class, but why are both defined in django.utils.datastructures.py?
my code:
class a(object):
    def __init__(self):
        print 'aaa'
    def __new__(self):
        print 'sss'

a()  # prints 'sss'

class b:
    def __init__(self):
        print 'aaa'
    def __new__(self):
        print 'sss'

b()  # prints 'aaa'
datastructures.py:
class SortedDict(dict):
    """
    A dictionary that keeps its keys in the order in which they're inserted.
    """
    def __new__(cls, *args, **kwargs):
        instance = super(SortedDict, cls).__new__(cls, *args, **kwargs)
        instance.keyOrder = []
        return instance

    def __init__(self, data=None):
        if data is None:
            data = {}
        super(SortedDict, self).__init__(data)
        if isinstance(data, dict):
            self.keyOrder = data.keys()
        else:
            self.keyOrder = []
            for key, value in data:
                if key not in self.keyOrder:
                    self.keyOrder.append(key)
And under what circumstances will SortedDict.__init__ be called?

Thanks.
You can define either or both of __new__ and __init__.
__new__ must return an object -- which can be a new one (typically that task is delegated to type.__new__), an existing one (to implement singletons, "recycle" instances from a pool, and so on), or even one that's not an instance of the class. If __new__ returns an instance of the class (new or existing), __init__ then gets called on it; if __new__ returns an object that's not an instance of the class, then __init__ is not called.
__init__ is passed a class instance as its first item (in the same state __new__ returned it, i.e., typically "empty") and must alter it as needed to make it ready for use (most often by adding attributes).
In general it's best to use __init__ for all it can do -- and __new__, if something is left that __init__ can't do, for that "extra something".
So you'll typically define both if there's something useful you can do in __init__, but not everything you want to happen when the class gets instantiated.
For example, consider a class that subclasses int but also has a foo slot -- and you want it to be instantiated with an initializer for the int and one for the .foo. As int is immutable, that part has to happen in __new__, so pedantically one could code:
>>> class x(int):
...     def __new__(cls, i, foo):
...         self = int.__new__(cls, i)
...         return self
...     def __init__(self, i, foo):
...         self.foo = foo
...     __slots__ = 'foo',
...
>>> a = x(23, 'bah')
>>> print a
23
>>> print a.foo
bah
>>>
In practice, for a case this simple, nobody would mind if you lost the __init__ and just moved the self.foo = foo to __new__. But if initialization is rich and complex enough to be best placed in __init__, this idea is worth keeping in mind.
__new__ and __init__ do completely different things. The method __init__ initializes a new instance of a class --- it is a constructor. __new__ is a far more subtle thing --- it can change the arguments and, in fact, the class of the initialized object. For example, the following code:
class Meters(object):
    def __new__(cls, value):
        return int(value / 3.28083)
If you call Meters(6) you will not actually create an instance of Meters, but an instance of int. You might wonder why this is useful; it is actually crucial to metaclasses, an admittedly obscure (but powerful) feature.
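You can check this at the interpreter; note that __init__ is never called here, since __new__ did not return a Meters instance:

>>> m = Meters(6)
>>> type(m)
<type 'int'>
>>> m
1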
You'll note that in Python 2.x, only classes inheriting from object can take advantage of __new__, as your code above shows.
The use of __new__ you showed in django seems to be an attempt to keep a sane method resolution order on SortedDict objects. I will admit, though, that it is often hard to tell why __new__ is necessary. Standard Python style suggests that it not be used unless necessary (as always, better class design is the tool you turn to first).
My only guess is that in this case, the author of this class wanted the keyOrder list to exist on the instance even before SortedDict.__init__ is called.
Note that SortedDict calls super() in its __init__, this would ordinarily go to dict.__init__, which would probably call __setitem__ and the like to start adding items. SortedDict.__setitem__ expects the .keyOrder property to exist, and therein lies the problem (since .keyOrder isn't normally created until after the call to super().) It's possible this is just an issue with subclassing dict because my normal gut instinct would be to just initialize .keyOrder before the call to super().
The code in __new__ might also be used to allow SortedDict to be subclassed in a diamond inheritance structure where it is possible SortedDict.__init__ is not called before the first __setitem__ and the like are called. Django has to contend with various issues in supporting a wide range of Python versions from 2.3 up; it's possible this code is completely unnecessary in some versions and needed in others.
There is a common use for defining both __new__ and __init__: accessing class properties which may be eclipsed by their instance versions without having to do type(self) or self.__class__ (which, in the existence of metaclasses, may not even be the right thing).
For example:
class MyClass(object):
    creation_counter = 0

    def __new__(cls, *args, **kwargs):
        cls.creation_counter += 1
        return super(MyClass, cls).__new__(cls)

    def __init__(self):
        print "I am the %dth myclass to be created!" % self.creation_counter
Finally, __new__ can actually return an instance of a wrapper or a completely different class from what you thought you were instantiating. This is used to provide metaclass-like features without actually needing a metaclass.
In my opinion, there was no need to override __new__ in the example you described.

Creation of an instance and actual memory allocation happen in __new__; __init__ is called after __new__ and is meant for initializing the instance, serving the job of a constructor in classical OOP terms. So, if all you want to do is initialize variables, then you should go for overriding __init__.

The real role of __new__ comes into play when you are using metaclasses. There, if you want to do something like changing or adding attributes, and that must happen before the class is created, you should go for overriding __new__.

Consider a completely hypothetical case where you want to make some attributes of a class private, even though they are not defined so (I'm not saying one should ever do that).
class PrivateMetaClass(type):
    def __new__(metaclass, classname, bases, attrs):
        private_attributes = ['name', 'age']
        for private_attribute in private_attributes:
            if attrs.get(private_attribute):
                attrs['_' + private_attribute] = attrs[private_attribute]
                attrs.pop(private_attribute)
        return super(PrivateMetaClass, metaclass).__new__(metaclass, classname, bases, attrs)

class Person(object):
    __metaclass__ = PrivateMetaClass

    name = 'Someone'
    age = 19

person = Person()
>>> hasattr(person, 'name')
False
>>> person._name
'Someone'
Again, it's just for instructional purposes; I'm not suggesting one should do anything like this.