I have a set of unit tests that will repeatedly mock a certain collaborator class Rental, with the same arguments passed every time. To make this easier, I want to make a subclass of mock.Mock and pass the arguments on creation. Here's the code:
class RentalMock(Mock):
    def __call__(self, *args, **kwargs):
        super(RentalMock, self).__call__(*args, spec_set=Rental, **kwargs)
        self.get_points.return_value = 0
        return self
The problem is that when I instantiate this class, the override has no visible effect. And trying to override it afterwards doesn't work either:
>>> a = RentalMock()
>>> a.get_points()
<RentalMock name='mock.get_points' id='4447663888'>
>>> a.get_points.return_value = 0
>>> a.get_points()
<RentalMock name='mock.get_points' id='4447663888'>
>>> a.configure_mock(**{"get_points.return_value": 0})
>>> a.get_points()
<RentalMock name='mock.get_points' id='4447663888'>
I'm thoroughly confused. I've tried three methods, all taken directly from the docs, and none seem to work. When I pass these arguments directly to an instance of Mock, they work fine. What am I missing?
You are overriding __call__ when it looks like you want to override __init__. Subclassing can get involved quickly, especially with something already as intricate as Mock. Perhaps a factory function would be simpler.
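For illustration, a minimal sketch of the factory-function route (assuming, as in the question, the standalone mock package and a Rental class that defines get_points):

from mock import Mock

def make_rental_mock():
    # Preconfigure the collaborator mock once, in a single place.
    m = Mock(spec_set=Rental)  # Rental is the question's collaborator class
    m.get_points.return_value = 0
    return m

# In a test:
rental = make_rental_mock()
assert rental.get_points() == 0

This sidesteps Mock's subclassing subtleties entirely. Note what bit the original code: a.get_points was itself a RentalMock (child mocks are created from the parent's class), so its overridden __call__ returned self unconditionally, which is why the configured return_value never showed up.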
I want to write a mock for a library object without inheriting from it, so that I can test properly without having to stub all the unused functions of the original object.
To be specific, I want to write a ContextMock for the invoke library.
class ContextMock:
    ...
The main problem is that I then call a @task function, which in turn calls the code I want to test. The @task decorator, however, checks whether the context object is an instance of Context, like this:
def __call__(self, *args, **kwargs):
    # Guard against calling tasks with no context.
    if not isinstance(args[0], Context):
        err = "Task expected a Context as its first arg, got {} instead!"
        # TODO: raise a custom subclass _of_ TypeError instead
        raise TypeError(err.format(type(args[0])))
Therefore my question is: can I somehow change the isinstance behaviour of my ContextMock, or make it look like an instance of Context without inheriting its attributes?
Or would it be possible to somehow mock the isinstance function itself?
How does the default implementation of __instancecheck__ work? Is there perhaps a base-class attribute that can be overwritten?
I already tried to provide a custom metaclass with a custom __instancecheck__ function, which of course does not work, since it is the __instancecheck__ of Context that gets called, right?
Also note that I'm well aware any hacky solution does not belong in production code; this is only used for testing.
Edit:
To add a generic example of what I want to achieve:
class Context:
    pass

class ContextMock:
    pass

mock = ContextMock()
# ... do magic with mock
assert isinstance(mock, Context)
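One way to make that assertion pass (a sketch, not from the original thread): CPython's default instance check falls back to the object's __class__ attribute when the real type doesn't match, which is the same trick unittest.mock uses so that spec'd mocks pass isinstance checks. So ContextMock can simply lie about its class:

class Context:
    pass

class ContextMock:
    @property
    def __class__(self):
        # isinstance() consults __class__ when type(obj) doesn't match,
        # so reporting Context here makes the check pass.
        return Context

mock = ContextMock()
assert isinstance(mock, Context)   # passes
assert type(mock) is not Context   # the real type is unchanged

Alternatively, MagicMock(spec=Context) passes isinstance checks against Context out of the box, without hand-writing a mock class.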
So I would like to update a class that lives in a library, but I have no control over that class (I can't touch the original source code). Constraint number 2: other users have already inherited from this parent class, and asking them to inherit from a third class would be a bit annoying. So I have to work with both constraints at the same time: I need to extend the parent class, but not by inheriting from it.
One solution seemed to make more sense at first, although it borders on monkey-patching: overriding some methods of the parent class with my own. I wrote a little decorator that could do that, but I ran into an error, and rather than giving you the ENTIRE code, here is an example. Consider that the following class, named Old here (the parent class), is in a library whose source I can't touch:
class Old(object):
    def __init__(self, value):
        self.value = value

    def at_disp(self, other):
        print "Value is", self.value, other
        return self.value
That's a simple class: a constructor, and a method with a parameter (to test a bit more). Nothing really hard so far. But here comes my decorator to extend a method of this class:
def override_hook(typeclass, method_name):
    hook = getattr(typeclass, method_name)
    def wrapper(method):
        def overriden_hook(*args, **kwargs):
            print "Before the hook is called."
            kwargs["hook"] = hook
            ret = method(*args, **kwargs)
            print "After the hook"
            return ret
        setattr(typeclass, method_name, overriden_hook)
        return overriden_hook
    return wrapper

@override_hook(Old, "at_disp")
def new_disp(self, other, hook):
    print "In the new hook, before"
    ret = hook(self, other)
    print "In the new hook, after"
    return ret
Surprisingly, this works perfectly. If you create an Old instance and call its at_disp method, the new method is called (and it calls the old one), much like hidden inheritance. But here is the real challenge:
We'll try a class inheriting from Old, which is what users have done. My "patch" should apply to them too, without them needing to do anything:
class New(Old):
    def at_disp(self, other):
        print "That's in New..."
        return super(Old, self).at_disp(self, other)
If you create a New object and try its at_disp method... it crashes: super() cannot find at_disp in Old. That is odd, because New directly inherits from Old. My guess is that, since my new replacement method is unbound, super() doesn't find it properly. If you replace super() with a direct call to Old.at_disp(), everything works.
Does somebody know how to fix this issue? And why it happens?
Thanks very much!
Two problems.
First, the call to super should be super(New, self), not super(Old, self). The first argument to super is generally the "current" class (i.e., the class whose method is calling super).
Second, the call to the at_disp method should just be at_disp(other), not at_disp(self, other). When you use the two-argument form of super, you get a bound super object that acts like an instance, so self will automatically be passed if you call a method on it.
So the call should be super(New, self).at_disp(other). Then it works.
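Putting both fixes together, the corrected subclass would be:

class New(Old):
    def at_disp(self, other):
        print "That's in New..."
        return super(New, self).at_disp(other)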
Using Django I am in the habit of overriding methods in generic views using super:
class MyClass(SomeGenericView):
    def method_to_override(self, request, *args, **kwargs):
        # do something extra here
        return super(MyClass, self).method_to_override(request, *args, **kwargs)
I notice that pydev autocompletes calling the method from the parent class instead:
class MyClass(SomeGenericView):
    def method_to_override(self, request, *args, **kwargs):
        # do something extra here
        return SomeGenericView.method_to_override(self, request, *args, **kwargs)
Is there any difference between these approaches? Is any one preferred for any reason?
Unless you're dealing with old-style classes (which is not the case here), using super() is the RightThing(tm) to do, as super() will properly take care of calling the right "next" method according to the inheritance graph. Remember, Python supports multiple inheritance, which, FWIW, is widely used in Django's class-based views.
If I'm not mistaken, a call that named MyClass here would recurse forever, since there would be no stopping criterion; it has to name the parent class, as the autocompleted version does.
But even with the parent class hardcoded, there are differences once you subclass MyClass: super() follows the actual instance's MRO, so a method inserted between MyClass and SomeGenericView (for example by a mixin) will get called, while the hardcoded call always goes straight to SomeGenericView.
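A hypothetical sketch of that difference (class names made up for illustration): with cooperative super() calls, a mixin introduced by a subclass gets its turn; with a hardcoded parent call, it is silently skipped.

class Base(object):
    def dispatch(self):
        return "Base"

class LoggingMixin(Base):
    def dispatch(self):
        return "Logging -> " + super(LoggingMixin, self).dispatch()

class WithSuper(Base):
    def dispatch(self):
        return "WithSuper -> " + super(WithSuper, self).dispatch()

class Hardcoded(Base):
    def dispatch(self):
        return "Hardcoded -> " + Base.dispatch(self)  # bypasses the MRO

class A(WithSuper, LoggingMixin):
    pass

class B(Hardcoded, LoggingMixin):
    pass

print(A().dispatch())  # WithSuper -> Logging -> Base   (mixin runs)
print(B().dispatch())  # Hardcoded -> Base              (mixin skipped)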
I am writing a class with multiple constructors using @classmethod. Now I would like both the __init__ constructor and the classmethod constructors to call some routine of the class to set initial values before doing other stuff.
From __init__ this is usually done with self:
def __init__(self, name="", revision=None):
    self._init_attributes()

def _init_attributes(self):
    self.test = "hello"
From a classmethod constructor, I would call another classmethod instead, because the instance (i.e. self) is not created until I leave the classmethod with return cls(...). Now, I can call my _init_attributes() method as
@classmethod
def from_file(cls, filename=None):
    cls._init_attributes()
    # do other stuff like reading from file
    return cls()
and this actually works (in the sense that I don't get an error, and I can actually see the test attribute after executing c = Class.from_file()). However, if I understand things correctly, this sets the attributes at the class level, not the instance level. Hence, if I initialize an attribute with a mutable object (e.g. a list), all instances of this class would share the same list rather than having their own. Is this correct? If so, is there a way to initialize "instance" attributes in classmethods, or do I have to write the code so that all the attribute initialisation is done in __init__?
Hmmm. Actually, while writing this: I may have even greater trouble than I thought, because __init__ will be called upon return from the classmethod, won't it? So what would be a proper way to deal with this situation?
Note: Article [1] discusses a somewhat similar problem.
Yes, you're understanding things correctly: cls._init_attributes() will set class attributes, not instance attributes.
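A quick illustration of the shared-state pitfall the question anticipates (Gadget is a made-up class, not from the thread):

class Gadget(object):
    @classmethod
    def _init_attributes(cls):
        cls.items = []  # set on the class, so every instance sees the same list

Gadget._init_attributes()
a, b = Gadget(), Gadget()
a.items.append(1)
print(b.items)  # [1] -- a and b share one list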
Meanwhile, it's up to your alternate constructor to construct and return an instance. In between constructing it and returning it, that's when you can call _init_attributes(). In other words:
@classmethod
def from_file(cls, filename=None):
    obj = cls()
    obj._init_attributes()
    # do other stuff like reading from file
    return obj
However, you're right that the only obvious way to construct and return an instance is to just call cls(), which will call __init__.
But this is easy to get around: just have the alternate constructors pass some extra argument to __init__ meaning "skip the usual initialization, I'm going to do it later". For example:
def __init__(self, name="", revision=None, _skip_default_init=False):
    # blah blah

@classmethod
def from_file(cls, filename=""):
    # blah blah setup
    obj = cls(_skip_default_init=True)
    # extra initialization work
    return obj
If you want to make this less visible, you can always take **kwargs and check it inside the method body… but remember, this is Python; you can't prevent people from doing stupid things, all you can do is make it obvious that they're stupid. And the _skip_default_init should be more than enough to handle that.
If you really want to, you can override __new__ as well. Constructing an object doesn't call __init__ unless __new__ returns an instance of cls or some subclass thereof. So, you can give __new__ a flag that tells it to skip over __init__ by munging obj.__class__, then restore the __class__ yourself. This is really hacky, but could conceivably be useful.
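For completeness, a sketch of that hack with made-up names (it relies on the fact that type.__call__ only runs __init__ when __new__ returns an instance of the requested class):

class _Raw(object):
    """Temporary disguise so that type.__call__ skips __init__."""

class Widget(object):
    def __new__(cls, *args, **kwargs):
        obj = super(Widget, cls).__new__(cls)
        if kwargs.get("_skip_init"):
            obj.__class__ = _Raw  # no longer a Widget, so __init__ is skipped
        return obj

    def __init__(self, name=""):
        self.name = name

    @classmethod
    def from_file(cls, filename=""):
        obj = cls(_skip_init=True)  # __init__ never runs
        obj.__class__ = cls         # restore the real class
        obj.name = filename         # alternate initialization goes here
        return obj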
A much cleaner solution—but for some reason even less common in Python—is to borrow the "class cluster" idea from Smalltalk/ObjC: Create a private subclass that has a different __init__ that doesn't super (or intentionally skips over its immediate base and supers from there), and then have your alternate constructor in the base class just return an instance of that subclass.
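A sketch of the class-cluster idea, again with hypothetical names: the alternate constructor returns a private subclass whose __init__ deliberately skips the base class's setup.

class Document(object):
    def __init__(self, name=""):
        self.name = name
        # ... the usual (possibly expensive) default setup ...

    @classmethod
    def from_file(cls, filename=""):
        # Return the private subclass; its __init__ replaces the default setup.
        return _DocumentFromFile(filename)

class _DocumentFromFile(Document):
    def __init__(self, filename):
        # Intentionally does NOT call super().__init__().
        self.name = filename
        # ... read the remaining state from the file ...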
Alternatively, if the only reason you don't want to call __init__ is so you can do the exact same thing __init__ would have done… why? DRY stands for "don't repeat yourself", not "bend over backward to find ways to force yourself to repeat yourself", right?
Like in this question, except I want querysets that return a mixed body of objects:
>>> Product.objects.all()
[<SimpleProduct: ...>, <OtherProduct: ...>, <BlueProduct: ...>, ...]
I figured out that I can't just set Product.Meta.abstract to True, or otherwise just OR together querysets of differing objects. Fine; but these are all subclasses of a common class, so if I leave their superclass as non-abstract, I should be happy, so long as I can get its manager to return objects of the proper class. The query code in Django does its thing and just makes calls to Product(). Sounds easy enough, except it blows up when I override Product.__new__, I'm guessing because of the __metaclass__ in Model... Here's non-Django code that behaves pretty much how I want:
class Top(object):
    _counter = 0

    def __init__(self, arg):
        Top._counter += 1
        print "Top#__init__(%s) called %d times" % (arg, Top._counter)

class A(Top):
    def __new__(cls, *args, **kwargs):
        if cls is A and len(args) > 0:
            if args[0] is B.fav:
                return B(*args, **kwargs)
            elif args[0] is C.fav:
                return C(*args, **kwargs)
            else:
                print "PRETENDING TO BE ABSTRACT"
                return None  # or raise?
        else:
            return super(A).__new__(cls, *args, **kwargs)

class B(A):
    fav = 1

class C(A):
    fav = 2

A(0)  # => None
A(1)  # => <B object>
A(2)  # => <C object>
But that fails if I inherit from django.db.models.Model instead of object:
File "/home/martin/beehive/apps/hello_world/models.py", line 50, in <module>
A(0)
TypeError: unbound method __new__() must be called with A instance as first argument (got ModelBase instance instead)
Which is a notably crappy backtrace; I can't step into the frame of my __new__ code in the debugger, either. I have variously tried super(A, cls), Top, super(A, A), and all of the above in combination with passing cls in as the first argument to __new__, all to no avail. Why is this kicking me so hard? Do I have to figure out Django's metaclasses to be able to fix this, or is there a better way to accomplish my ends?
Basically what you're trying to do is to return the different child classes, while querying a shared base class. That is: you want the leaf classes. Check this snippet for a solution: http://www.djangosnippets.org/snippets/1034/
Also be sure to check out the docs on Django's Contenttypes framework: http://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/ It can be a bit confusing at first, but Contenttypes will solve additional problems you'll probably face when using non-abstract base classes with Django's ORM.
You want one of these:
http://code.google.com/p/django-polymorphic-models/
https://github.com/bconstantin/django_polymorphic
There are downsides, namely extra queries.
Okay, this works: https://gist.github.com/348872
The tricky bit was this.
class A(Top):
    pass

def newA(cls, *args, **kwargs):
    # [all that code you wrote for A.__new__]

A.__new__ = staticmethod(newA)
Now, there's something about how Python binds __new__ that I maybe don't quite understand, but the gist of it is this: Django's ModelBase metaclass creates a new class object rather than using the one that's passed in to its __new__; call that A_prime. Then it sticks all the attributes you had in the class definition for A onto A_prime, but __new__ doesn't get re-bound correctly.
Then when you evaluate A(1), A is actually A_prime here, so Python calls <A.__new__>(A_prime, 1), which doesn't match up, and it explodes.
So the solution is to define your __new__ after A_prime has been defined.
Maybe this is a bug in django.db.models.base.ModelBase.add_to_class, maybe it's a bug in Python; I don't know.
Now, when I said "this works" earlier, I meant this works in isolation with the minimal object construction test case in the current SVN version of Django. I don't know if it actually works as a Model or is useful in a QuerySet. If you actually use this in production code, I will make a public lightning talk out of it for pdxpython and have them mock you until you buy us all gluten-free pizza.
Simply stick @staticmethod before the __new__ method.
@staticmethod
def __new__(cls, *args, **kwargs):
    print args, kwargs
    return super(License, cls).__new__(cls, *args, **kwargs)
Another approach that I recently found: http://jeffelmore.org/2010/11/11/automatic-downcasting-of-inherited-models-in-django/