The Mock documentation describes a simple and elegant way of applying patches to all of the test methods inside a TestCase:
@patch('foo.bar')
@patch('foo.baz')
@patch('foo.quux')
@patch('foo.narf')
class FooTest(TestCase):
    def test_foo(self, bar, baz, quux, narf):
        """ foo """
        self.assertTrue(False)
However, one issue I've encountered with this method is that if I'd like to call stop() on one of the patches inside one of the test methods, there doesn't appear to be any way of getting a reference to the patcher object -- the only things passed into the method are the mock objects, in this case bar, baz, quux, and narf.
The only way I've found to solve this problem is to move to the pattern described in the Mock docs where the patchers are instantiated and started inside the setUp method of the TestCase and stopped inside the tearDown method. This fits my purpose, but adds a lot of extra boilerplate and isn't as elegant as the class decorator approach.
Is there another way to solve this problem?
Say you want to temporarily restore foo.narf in a method. foo.narf is, in the context of the decorated function, a MagicMock object. This object has a _mock_wraps attribute which will be invoked when the mock is called! So at the top of your module, _narf = foo.narf, and in your test case, foo.narf._mock_wraps = _narf.
The catch is that this will only pass through to the real function, not actually swap it back, which means that some test cases will fail (e.g. if they rely on the function object actually being "itself"). And if your mock has other attributes, that could interfere (I haven't tested much) because the passthrough call to _mock_wraps() comes at the bottom of a method that first considers the other properties of the mock.
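For concreteness, here is a minimal sketch of that pass-through trick. The module foo and its narf function are hypothetical stand-ins, and it assumes nothing else has set a return_value or side_effect on the mock:

import foo  # hypothetical module containing narf()
from unittest import TestCase
from unittest.mock import patch

_narf = foo.narf  # keep a reference to the real function before patching

@patch('foo.narf')
class FooTest(TestCase):
    def test_passthrough(self, narf_mock):
        # foo.narf is now a MagicMock; route calls through to the real function.
        foo.narf._mock_wraps = _narf
        foo.narf()  # executes the real narf() via the wraps pass-through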
When patch() is used this way, a separate copy of each patcher is created per method and added to a list called patchings, which is an attribute of the method itself. I.e. you can access this list as self.test_foo.patchings and go through it to find the one you want.
However, start() and stop() are not actually called when you use patch() as a decorator, and the behavior gets tricky once you start reaching in and changing it. So I wrote this context manager.
class unpatch:
    def __init__(self, name, method):
        compare = patch(name)
        self.patcher = next((
            p for p in method.patchings
            if p.target == compare.getter()
            and p.attribute == compare.attribute
        ), None)
        if self.patcher is None:
            raise ValueError(name)

    def __enter__(self):
        self.patcher.__exit__()

    def __exit__(self, *exc_info):
        self.patcher.__enter__()
Inside your test case, you use it like this:
with unpatch('foo.narf', self.test_foo):
    foo.narf()
Disclaimer: this is hacks.
I have an implementation class where there's a save method which is being called in multiple places within the class.
So basically that method takes an argument and returns a file URL, which is a string.
In the class I'm trying to test, I'm saving multiple files in different locations. Hence, how can I test that in my unit test class?
For example, I was able to mock the delete method like below, which is being called only once:
@patch.object(FileStoreSp, "delete_file", return_value=True)
But for the save method I'm not sure how I can test it, since it's being called in multiple places and it returns different values. Is there a way I can pass the return values in some sort of order matching the order in which the method is called?
Any help would be appreciated.
You could monkey patch the save method. You could create a temp directory and test that everything is in place after your function has run.
However, the scenario you describe indicates that you probably should refactor your code to be more testable. Writing files is a so-called "side effect". Side effects make your code harder (maybe impossible) to test. Try to avoid side effects if possible, and if they are really needed, try to concentrate them in one place at the boundary of your system. There are many strategies to achieve this. For example:
Rearrange function calls
Delegate the execution of the side effect. E.g. let the function return a value describing what should be done (return "write-file", "Filename") and handle those at the top level, as in the sketch below
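As a rough illustration of that second strategy (all names here are made up), the core logic only describes the file write, and the side effect is executed once at the boundary:

def build_report(data: str) -> tuple[str, str]:
    # Pure logic: no disk I/O here, so it is trivial to unit test.
    return "write-file", f"report-{len(data)}.txt"

def main() -> None:
    action, filename = build_report("some data")
    # The side effect happens only here, at the boundary of the system.
    if action == "write-file":
        with open(filename, "w") as fh:
            fh.write("some data")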
If you really cannot change the code (maybe it's 3rd-party code out of your control), then you can monkey patch nearly everything in Python. How to do it best depends on your concrete scenario and code. For the unittest framework, have a look at MagicMock.
If I understand correctly, you have some method on your class and you want to test that method. And that method calls another method (save) more than once. Now you want to mock out the save method, while testing that other method, which is the correct approach.
Let's abstract this for a moment. Say the method you are testing is called bar and inside it calls the method foo twice. Now foo does all sorts of stuff including side effects (disk I/O, whatever), so you obviously want to mock it during the bar test. Yet you want to ensure that foo is called in the way you expect it from bar and also that bar does something specific with the return values it gets from foo.
Thankfully, the Mock class allows you to set the side_effect attribute in various ways. One of them is setting it to an iterable. Calling the mock once then returns the next element from that iterable. This allows you to set multiple distinct return values for the mocked object in advance.
We can then leverage the assert_has_calls method of the mocked object using call objects to verify that foo was called with the expected arguments.
Here is an example to illustrate the concept:
from unittest import TestCase
from unittest.mock import MagicMock, call, patch


class MyClass:
    def foo(self, string: str) -> list[str]:
        print("Some side effect")
        return string.split()

    def bar(self, string1: str, string2: str) -> tuple[str, str]:
        x = self.foo(string1)[0]
        y = self.foo(string2)[0]
        return x, y


class MyTestCase(TestCase):
    @patch.object(MyClass, "foo")
    def test_bar(self, mock_foo: MagicMock) -> None:
        # Have mocked `foo` return ["a"] first, then ["b"]
        mock_foo.side_effect = ["a"], ["b"]
        # Thus, we expect `bar` to return ("a", "b")
        expected_bar_output = "a", "b"
        obj = MyClass()
        # The arguments for `bar` are not important here,
        # they just need to be unique to ensure correct calls of `foo`:
        arg1, arg2 = MagicMock(), MagicMock()
        output = obj.bar(arg1, arg2)
        # Ensure the output is as expected:
        self.assertTupleEqual(expected_bar_output, output)
        # Ensure `foo` was called as expected:
        mock_foo.assert_has_calls([call(arg1), call(arg2)])
Hope this helps.
I am trying to unit test a block of code, and I'm running into issues with mocking the object's type to grab the right function from a dictionary.
For example:
my_func_dict = {
    Foo: foo_func,
    Bar: bar_func,
    FooBar: foobar_func,
}

def generic_type_func(my_obj):
    my_func = my_func_dict[type(my_obj)]
    my_func()
With this code, I can swap between functions with a key lookup, and it's pretty efficient.
When I try to mock my_obj like this, I get a KeyError:
mock_obj = Mock(spec=Foo)
generic_type_func(mock_obj)
# OUTPUT:
# KeyError: <class 'unittest.mock.Mock'>
Because it's a mock type. Although, when I check isinstance(), it returns true:
is_instance_foo = isinstance(mock_obj, Foo)
print(is_instance_foo)
# Output:
# True
Is there any way to retain the type() check and the dictionary lookup via a key, while still maintaining the ability to mock the input and return type? Or perhaps a different pattern where I can retain the performance of a dictionary, but use isinstance() instead so I can mock the parameter? Looping over a list to check the type against every possible value is not preferred.
I managed to unit test this by moving the function to the parameter itself, and implicitly calling the function from the parent. I wanted to avoid this, because now the function manipulates the parent implicitly instead of explicitly from the parent itself. It looks like this now:
def generic_type_func(self, my_obj):
    my_obj.my_func(self)
The function then modifies self as needed, but implicitly instead of an explicit function on the parent class.
This:
def my_func(self, parent):
    self.foo_prop = parent
Rather than:
def my_foo_func(self, foo):
    foo.foo_prop = self
This works fine with a mock, and I can mock that function easily. I've just hidden some of the functionality, and properties on the parent are now edited implicitly from within the class I'm working in, rather than explicitly on the parent itself. Maybe this is preferable anyway, and it looks cleaner with less code on the parent class. Every instance must have my_func this way, which is enforced via an abstract base class (a rough sketch of this follows below).
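A rough sketch of what that could look like with an abstract base class; the class and attribute names here are illustrative, not taken from the original code:

from abc import ABC, abstractmethod

class Handler(ABC):
    @abstractmethod
    def my_func(self, parent) -> None:
        ...

class FooHandler(Handler):
    def my_func(self, parent) -> None:
        # Modify our own state, keeping a reference back to the parent.
        self.foo_prop = parent

class Parent:
    def generic_type_func(self, my_obj: Handler) -> None:
        # Dispatch through the object itself; easy to replace with a mock.
        my_obj.my_func(self)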
I'm using the datashape Python package and registering a new type with the @datashape.discover.register decorator. I'd like to test that when I call datashape.discover on an object of the type I'm registering, it calls the function being decorated. I'd also like to do this with good unit testing principles, meaning not actually executing the function being decorated, as it would have side effects I don't want in the test. However, this isn't working.
Here's some sample code to demonstrate the problem:
myfile.py:
@datashape.discover.register(SomeType)
def discover_some_type(data):
    ...some stuff i don't want done in a unit test...
test_myfile.py:
class TestDiscoverSomeType(unittest.TestCase):
    @patch('myfile.discover_some_type')
    def test_discover_some_type(self, mock_discover_some_type):
        file_to_discover = SomeType()
        datashape.discover(file_to_discover)
        mock_discover_some_type.assert_called_with(file_to_discover)
The issue seems to be that the function I want mocked is mocked in the body of the test; however, it was not mocked when it was decorated (i.e. when the module was imported). The discover.register function essentially registers the decorated function internally so it can look it up when discover() is called with an argument of the given type. Unfortunately, it registers the real function every time, and not the patched version I want, so it will always call the real function.
Any thoughts on how to be able to patch the function being decorated and assert that it is called when datashape.discover is called?
Here's a solution I've found that's only a little hacky:
sometype.py:
def discover_some_type(data):
    ...some stuff i don't want done in a unit test...
discovery_channel.py:
import sometype

@datashape.discover.register(SomeType)
def discover_some_type(data):
    return sometype.discover_some_type(data)
test_sometype.py:
class TestDiscoverSomeType(unittest.TestCase):
    @patch('sometype.discover_some_type')
    def test_discover_some_type(self, mock_discover_some_type):
        import discovery_channel
        file_to_discover = SomeType()
        datashape.discover(file_to_discover)
        mock_discover_some_type.assert_called_with(file_to_discover)
The key is that you have to patch out whatever will actually do stuff before you import the module that has the decorated function that will register the patched function to datashape. This unfortunately means that you can't have your decorated function and the function doing the discovery in the same module (so things that should logically go together are now apart). And you have the somewhat hacky import-in-a-function in your unit test (to trigger the discover.register). But at least it works.
I am trying to use a nose_parameterized test and want to use it for a unittest method.
from nose.tools import assert_equal
from nose_parameterized import parameterized
import unittest


class TestFoo(unittest.TestCase):
    def setUp(self):
        self.user1 = "Bar"
        self.user2 = "Foo"

    @parameterized.expand([
        ("testuser1", self.user1, "Bar"),
        ("testuser2", self.user2, "Foo")
    ])
    def test_param(self, name, input, expected):
        assert_equal(input, expected)
But self is not defined in the decorator call. Is there a workaround for this? I know that I can use class-level variables, but I need to use variables set in setUp.
One workaround would be to use a string containing the attribute name in the decorator, and getattr in the test function:
@parameterized.expand([
    ("testuser1", "user1", "Bar"),
    ("testuser2", "user2", "Foo")
])
def test_param(self, name, input, expected):
    assert_equal(getattr(self, input), expected)
With this approach, test_param assumes that the value of its input argument is the attribute name whose value should be checked against expected.
The decorator is not run when you seem to assume it will be run. In the following example:
class spam:
    @eggs
    def beans(self):
        pass
remember that the use of the decorator is the same as saying:
beans = eggs( beans )
inside the spam scope, immediately after the def statement itself is executed. When is a def statement executed? At the time the class and its methods are defined. The decorator modifies the class-level definition of the method spam.beans, not the instance-level value of self.beans. And of course, this occurs long before any instances of that class are ever created, i.e. before a reference to any one particular instance, self, is ever meaningful.
If you want to attach a particular callable (e.g. a modified test_param callable that has certain arguments pre-baked into it using functools.partial) to an instance self, you can of course do so inside one of the instance methods (e.g. __init__ or setUp).
Some people will describe the class-definition code as happening at "parse time" and instance-level code as happening at "run time". You may or may not find that a helpful way of thinking about it, although really almost everything is "run-time" in Python.
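To illustrate the functools.partial idea mentioned above, here is a small sketch of attaching a pre-baked callable to the instance in setUp; the attribute and method names are just examples:

import functools
import unittest

class TestFoo(unittest.TestCase):
    def setUp(self):
        self.user1 = "Bar"
        # Pre-bake the attribute name and expected value into a per-instance callable.
        self.check_user1 = functools.partial(self._check_user, "user1", "Bar")

    def _check_user(self, attr_name, expected):
        self.assertEqual(getattr(self, attr_name), expected)

    def test_user1(self):
        self.check_user1()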
I've been using the mock library to do some of my testing. It's been great so far, but there are some things that I haven't completely understand yet.
mock provides a nice way of patching an entire method using patch, and I could access the patched object in a method like so:
@patch('package.module')
def test_foo(self, patched_obj):
    # ... call patched_obj here
    self.assertTrue(patched_obj.called)
My question is, how do I access a patched object, if I use the patch decorator on an entire class?
For example:
@patch('package.module')
class TestPackage(unittest.TestCase):
    def test_foo(self):
        # how to access the patched object?
In this case, test_foo will have an extra argument, the same way as when you decorate the method. If your method is also patched, those args will be added as well:
@patch.object(os, 'listdir')
class TestPackage(unittest.TestCase):
    @patch.object(sys, 'exit')
    def test_foo(self, sys_exit, os_listdir):
        os_listdir.return_value = ['file1', 'file2']
        # ... Test logic
        sys_exit.assert_called_with(1)
The argument order is determined by the order in which the decorators are applied. The method decorator is applied first, so it supplies the first argument. The class decorator is outer, so its mock is added as the second argument. The same applies when you attach several patch decorators to the same test method or class (i.e. the outermost decorator's mock goes last).
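As a small sketch of that ordering with several decorators on one method (patching standard-library functions purely for illustration), the decorator closest to the function supplies the first mock argument:

import os
from unittest import TestCase
from unittest.mock import patch

class TestOrder(TestCase):
    @patch('os.getcwd')    # outermost decorator -> last mock argument
    @patch('os.listdir')   # innermost decorator -> first mock argument
    def test_order(self, mock_listdir, mock_getcwd):
        mock_listdir.return_value = []
        mock_getcwd.return_value = '/tmp'
        self.assertEqual([], os.listdir('.'))
        self.assertEqual('/tmp', os.getcwd())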