What is the difference between mocking and monkey patching? - python

I work with python and I'm a bit new to testing. I often see tests replacing an external dependency with a local method like so:
import some_module

def get_file_data():
    return "here is the pretend file data"

some_module.get_file_data = get_file_data
# proceed to test
I see this referred to as "monkey patching" as in the question. I also see the word "mock" being used a lot alongside "monkey patching" or in what seem to be very similar scenarios.
Is there any difference between the two concepts?

Monkey patching is replacing a function/method/class by another at runtime, for testing purposes, fixing a bug, or otherwise changing behaviour.
The unittest.mock library makes use of monkey patching to replace parts of your software under test with mock objects. It provides functionality for writing clever unit tests, such as:
It keeps a record of how mock objects are called, so you can assert on the calling behaviour of your code.
A handy decorator, patch(), for the actual monkey patching.
You can make mock objects return specific values (return_value) or raise specific exceptions (side_effect).
Mocking of "magic methods" (e.g. __str__).
You can use mocking, for example, to replace network I/O (urllib, requests) in a client, so unittests work without depending on an external server.

Related

Using Fixtures vs passing method as argument

I'm just learning Python and pytest and came across fixtures. Pardon the basic question, but I'm wondering what the advantage of using fixtures in Python is, since you can already pass a method as an argument, for example:
def method1():
    return 'hello world'

def method2(methodToRun):
    result = methodToRun()
    return result

method2(method1)
What would be the advantage of passing a @pytest.fixture object as an argument instead?
One difference is that fixtures pass the result of calling the function, not the function itself. That doesn't answer the question of why you'd want to use pytest.fixture instead of just calling the function manually, though, so I'll list a couple of reasons.
One reason is the global availability. After you write a fixture in conftest.py, you can use it in your whole test suite just by referencing its name and avoid duplicating it, which is nice.
In case your fixture returns a mutable object, pytest also calls the fixture function anew for each test, so you can be sure that tests using the same fixture won't affect each other through shared state. If pytest didn't do that by default, you'd have to do it by hand.
A big one is that the plugin system of pytest uses fixtures to make its functionality available. So if you are a web dev and want to have a mock-server for your tests, you just install pytest-localserver and now adding httpserver, httpsserver, and smtpserver arguments to your test functions will inject the fixtures from the library you just installed. This is incredibly convenient and intuitive, in particular when compared to injection mechanisms in other languages.
The bottom line is that it is useful to have a single way to include dependencies in your test suites, and pytest chose a fixture mechanism that magically binds itself to function signatures. So while it really is no different from manually inserting the argument, the quality-of-life features pytest adds through it make it worth it.
Fixtures are a way of centralizing your test variables and avoiding redundancy. If you are comfortable with the concept of Dependency Injection, the advantages are basically the same: pytest will automatically bind your parameters to the available fixtures, so you build tests more quickly by simply asking for what you need.
Also, fixtures enable you to easily parametrize all your tests at once, which avoids some cumbersome code if you do it by hand (more info in the documentation: https://docs.pytest.org/en/latest/fixture.html#parametrizing-fixtures).
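To make the first point concrete, here is a minimal sketch (the fixture name and value are made up). In a real project the fixture would live in conftest.py so every test module can request it by name:

```python
import pytest

def make_greeting():
    # Fixture body: pytest calls this function and hands the *result*
    # to the test, freshly created for each test that requests it.
    return "hello world"

# Equivalent to decorating make_greeting with @pytest.fixture;
# written this way so the plain function stays callable too.
greeting = pytest.fixture(make_greeting)

def test_greeting(greeting):
    # The parameter name binds to the fixture automatically; the test
    # receives the return value, not the function.
    assert greeting == "hello world"
```

This is the "result of calling the function" behaviour described above: the test never calls method-style code itself, it just names the dependency.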
Some references:
Official documentation: https://docs.pytest.org/en/latest/fixture.html
Dependency injection: https://en.wikipedia.org/wiki/Dependency_injection

Monkeypatch persisting across unit tests python

I have a custom framework which runs different code for different clients. I have monkeypatched certain methods in order to customize functionality for a client.
Here is the pattern simplified:
# import monkeypatches here
if self.config['client'] == 'cool_dudes':
    from app.monkeypatches import Stuff
if self.config['client'] == 'cool_dudettes':
    from app.monkeypatches import OtherStuff
Here is an example patch:
from app.framework.stuff import Stuff

def function_override(self):
    pass

Stuff.function = function_override
This works fine when the program runs normally, since it is executed in a batch manner, spinning up from scratch every time. However, when running unit tests, I find that the monkey patches persist across tests, causing unexpected behavior.
I realize that it would be far better to use an object oriented inheritance approach to these overrides, but I inherited this codebase and am not currently empowered to rearchitect it to that degree.
Barring properly re-architecting the program, how can I prevent these monkey patches from persisting across unit tests?
The modules, including app.framework.<whatever>, are not reloaded for every test, so any changes you make in them persist. The same happens if your module is stateful (that's one of the reasons why global state is not such a good idea; you should rather keep state in objects).
Your options are to:
undo the monkey-patches when needed, or
change them into something more generic that would change (semi-)automatically depending on the test running, or
(preferred) not reinvent the wheel, and use an existing, manageable, time-proven solution for your task (or at least base your work on one if it doesn't meet your requirements completely). E.g. if you use them for mocking, see How can one mock/stub python module like urllib. Among the suggestions there is @mock.patch, which applies the patch for a specific test and undoes it upon its completion.
Anyone coming here looking for information about monkeypatching, might want to have a look at pytest's monkeypatch fixture. It avoids the problem of the OP by automatically undoing all modifications after the test function has finished.
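A sketch of how the monkeypatch fixture avoids the leak described in the question (the class and method names here stand in for app.framework.stuff.Stuff and are illustrative):

```python
import pytest

class Stuff:
    # Stand-in for the production class from the question.
    def function(self):
        return "framework behaviour"

def client_override(self):
    return "client behaviour"

def test_patched(monkeypatch):
    # The patch is applied only for the duration of this test...
    monkeypatch.setattr(Stuff, "function", client_override)
    assert Stuff().function() == "client behaviour"

def test_unpatched():
    # ...so a later test sees the original method again, with no
    # manual cleanup required.
    assert Stuff().function() == "framework behaviour"
```

Compare this with the module-level `Stuff.function = function_override` assignment from the question, which stays in effect for every test that imports the module afterwards.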

Is it okay to use python mock for production?

A lot of my mock usage is in unit tests, but I am not sure if I can use the mock library in production. Consider the following trivial example of getting data from an external source.
class Receiver(object):
    def get_data(self):
        return _call_api(...)
Now, can I use the mock library to change the get_data() function for re-run purposes in production?
with patch('Receiver.get_data') as mock_get_data:
    mock_get_data.return_value = [1, 2]
    ...
Some might suggest writing another rerun receiver as a better approach; while I don't disagree, I am still raising this question for the sake of curiosity.
My questions include:
If no, what's the reason?
If yes, any caveats?
I would agree that for production use, a Receiver subclass that has an overridden get_data method would be much better.
The reason is simple -- if each type of receiver only receives data from a single source, then your code will be much easier to read and maintain. If the same Receiver can end up returning data from multiple sources, the code becomes confusing and you'll have to hunt down whether you were fetching data from one place or whether it's data that you explicitly set via a mock, etc.
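A minimal sketch of the subclass approach suggested above (the RerunReceiver name and the recorded data are invented for illustration):

```python
class Receiver(object):
    def get_data(self):
        # In production this would call the external API,
        # i.e. return _call_api(...).
        raise NotImplementedError("would call the external API")

class RerunReceiver(Receiver):
    """Replays previously captured data instead of calling the API."""
    def __init__(self, recorded):
        self.recorded = recorded

    def get_data(self):
        return self.recorded

# Reading the code makes it obvious where the data comes from,
# with no patching machinery left active in production:
receiver = RerunReceiver([1, 2])
assert receiver.get_data() == [1, 2]
```

Unlike a patch() context manager, nothing here needs to be undone, and the choice of data source is explicit at construction time.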
No. If a function is supposed to behave a certain way in production, then code it to behave that way. If you need fallback or retry behavior, mock is not the right way to do that.

Python Testing - Reset all mocks?

When doing unit testing with Python / pytest, if you do not have patch decorators or with patch blocks throughout your code, is there a way to reset all mocks at the end of every file / module to avoid inter-file test pollution?
It seems like something that is mocked in one Python test file remains mocked in other files with the same return value, which means my mocks persist between tests and files (when a patch decorator or with patch block is NOT used).
Is there any way around this other than patching? There wouldn't happen to be a mock.reset_all_mocks() or something like that, would there?
What I ended up doing was using the pytest-mock library. According to the Readme:
This plugin installs a mocker fixture which is a thin-wrapper around the patching API provided by the excellent mock package, but with the benefit of not having to worry about undoing patches at the end of a test. (Emphasis added.)
So now I can do: mocker.patch.object(module, 'method', return_value='hi'), and the patch will be removed at the end of the test.
There is no need to use with any more, so this solution scales nicely if you have many mocks in one test or if you want to change mocks during the test.
After monkey-patching, I undo it at the end of the test to avoid any leaking to other tests and to limit the patching to the test's scope.
def test1(monkeypatch):
    monkeypatch.setattr(...)
    assert(...)
    monkeypatch.undo()
Why not use monkeypatch?
The monkeypatch function argument helps you to safely set/delete an attribute, dictionary item, or environment variable, or to modify sys.path for importing.
You can:
def test1(monkeypatch):
    monkeypatch.setattr(...)
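For completeness, the standard library itself offers mock.patch.stopall(), which undoes every patch started with start() and is the closest thing to a "reset all mocks" call. A minimal sketch (the Service class is made up):

```python
from unittest import mock

class Service:
    def value(self):
        return "real"

# Patches started with start() persist until explicitly stopped --
# this is exactly how mocks "leak" between tests when no decorator
# or with-block is used.
patcher = mock.patch.object(Service, "value", return_value="fake")
patcher.start()
assert Service().value() == "fake"

# stopall() undoes every patch started via start() in one call.
mock.patch.stopall()
assert Service().value() == "real"
```

Calling mock.patch.stopall() in a teardown (or an autouse fixture) guards against patches that some test forgot to stop.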

Is having a unit test that is mostly mock verification a smell?

I have a class that connects three other services, specifically designed to make implementing the other services more modular, but the bulk of my unit test logic is in mock verification. Is there a way to redesign to avoid this?
Python example:
class Input(object): pass
class Output(object): pass
class Finder(object): pass

class Correlator(object):
    def __init__(self, input, output, finder):
        self.input = input
        self.output = output
        self.finder = finder

    def run(self):
        self.finder.find(self.input.GetRows())
        self.output.print(self.finder)
I then have to mock input, output and finder. Even if I do make another abstraction and return something from Correlator.run(), it will still have to be tested as a mock.
Just ask yourself: what exactly do you need to check in this particular test case? If this check does not rely on other classes not being dummy, then you are OK.
However, a lot of mocks is usually a smell, in the sense that you are probably trying to test integration without actually doing integration. So if you assume that a class that passes its tests with mocks will work fine with the real classes, then yes, you have to write some more tests.
Personally, I don't write many unit tests at all. I'm a web developer and I prefer functional tests, which exercise the whole application via HTTP requests, as users would. Your case may be different.
There's no reason to use only unit tests; maybe integration tests would be more useful for this case. Initialize all the objects properly, use the main class a bit, and assert on the (possibly complex) results. That way you'll test interfaces, output predictability, and other things which are important further up the stack. I've used this before, and found that something which is difficult to integration-test probably has too many attributes / parameters or too complicated / wrongly formatted output.
On a quick glance, this does look like the level of mocking is becoming too large. If you're using a dynamic language (I'm assuming yes, since your example is in Python), I'd try to construct subclasses of the production classes with the most problematic methods overridden to present mocked data, so you'd get a mix of production and mocked code. If your code path doesn't allow for instantiating the objects, I'd try monkey patching in replacement methods returning mock data.
Whether or not this is a code smell also depends on the quality of the mocked data. Dropping into a debugger and copy-pasting known correct data, or sniffing it from the network, is in my experience the preferred way of ensuring that.
Integration vs unit testing is also an economical question: how painful is it to replace unit tests with integration/functional tests? The larger the scale of your system, the more there is to gain with light-weight mocking, and hence, unit tests.
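A sketch of the partial-mock subclass idea from the answer above, where only the problematic method is overridden and the rest of the production code runs unmodified (the class and data are invented):

```python
class RowSource:
    """Production class whose expensive method we want to avoid in tests."""
    def get_rows(self):
        raise RuntimeError("would query the database")

    def count(self):
        # Cheap production logic we still want to exercise for real.
        return len(self.get_rows())

class FakeRowSource(RowSource):
    # Only the problematic method is overridden with mocked data;
    # count() runs unmodified production code against it.
    def get_rows(self):
        return ["row1", "row2", "row3"]

assert FakeRowSource().count() == 3
```

This gives the mix of production and mocked code the answer describes, without asserting on mock call records at all.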
