Python Testing - Reset all mocks?

When doing unit testing with Python / pytest, if you do not have patch decorators or with patch blocks throughout your code, is there a way to reset all mocks at the end of every file / module to avoid inter-file test pollution?
It seems like something that is mocked in one Python test file remains mocked in another file with the same return value, which means my mocks are persisting between tests and files (when a patch decorator or with patch block is NOT used).
Is there any way around this other than patching? There wouldn't happen to be a mock.reset_all_mocks() or something like that, would there?

What I ended up doing was using the pytest-mock library. According to the Readme:
This plugin installs a mocker fixture which is a thin-wrapper around the patching API provided by the excellent mock package, but with the benefit of not having to worry about undoing patches at the end of a test. (Emphasis added.)
So now I can do: mocker.patch.object(module, 'method', return_value='hi'), and the patch will be removed at the end of the test.
There is no need for a with block any more, so this solution scales nicely if you have many mocks in one test or if you want to change mocks during a test.
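For illustration, a minimal sketch of that pattern; app.greetings and fetch_greeting are hypothetical names, and pytest-mock must be installed:

import app.greetings as greetings  # hypothetical module under test

def test_greeting_is_patched(mocker):
    # pytest-mock's mocker fixture undoes this patch automatically
    # when the test finishes, so nothing leaks into other tests.
    mocker.patch.object(greetings, 'fetch_greeting', return_value='hi')
    assert greetings.fetch_greeting() == 'hi'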

After monkey-patching, I undo it at the end of the test to avoid any leaking into other tests and to limit the patching to the intended scope.
def test1(monkeypatch):
    monkeypatch.setattr(...)
    assert(...)
    monkeypatch.undo()

Why not use monkeypatch?
The monkeypatch function argument helps you to safely set/delete an attribute, dictionary item or environment variable or to modify sys.path for importing.
You can:
def test1(monkeypatch):
    monkeypatch.setattr(.....
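A fuller minimal sketch (app.weather and current_temp are hypothetical names):

import app.weather as weather  # hypothetical module under test

def test_current_temp(monkeypatch):
    # monkeypatch reverts this change automatically after the test
    monkeypatch.setattr(weather, 'current_temp', lambda: 21)
    assert weather.current_temp() == 21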


How to run a scoped function before all pytest fixtures of that scope?

I saw this question, which asks the same about running things before tests.
I need to do things before fixtures.
For example, I am setting up a dockerized environment, which I have to clean before building. To make things more complicated, I am using this plugin which defines fixtures I can't control or change, and I need something that comes before all fixtures (and does docker-compose down and other cleanup).
For example, when pytest starts, run the common per-session pre-step, then the fixtures, then the per-module pre-step, then the fixtures, and so on.
Is this a supported hook in pytest?
I couldn't find a relevant doc.
As @MrBean Bremen stated, pytest_sessionstart does the trick.
An annoying problem with that is that I can't use fixtures inside it (naturally); trying to do so fails with: Argument(s) {'fixture_name'} are declared in the hookimpl but can not be found in the hookspec.
I tried to fix that with pytest-dependency, but it doesn't seem to work on fixtures, only on tests.
I am left with the hacky workaround of extracting the data sessionstart needs into plain functions, and calling them both in sessionstart and in similarly named fixtures.
It works, but it is ugly.
I would accept a cleaner solution over this one.
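For reference, a minimal sketch of the hook-based approach in conftest.py; the docker-compose cleanup stands in for the per-session pre-step described above and is only an assumed example:

# conftest.py
import subprocess

def pytest_sessionstart(session):
    # Runs once per session, before collection and before any fixture
    # is set up, so the environment is clean when fixtures start.
    subprocess.run(["docker-compose", "down", "--remove-orphans"], check=False)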

Using Fixtures vs passing method as argument

I'm just learning Python and pytest and came across fixtures. Pardon the basic question, but I'm wondering what the advantage of using fixtures in Python is, since you can already pass a method as an argument, for example:
def method1():
    return 'hello world'

def method2(methodToRun):
    result = methodToRun()
    return result

method2(method1)
What would be the advantage of passing a @pytest.fixture object as an argument instead?
One difference is that fixtures pass the result of calling the function, not the function itself. That doesn't answer why you'd want to use pytest.fixture instead of just calling the function manually, though, so I'll list a couple of things.
One reason is the global availability. After you write a fixture in conftest.py, you can use it in your whole test suite just by referencing its name and avoid duplicating it, which is nice.
In case your fixture returns a mutable object, pytest also handles calling it anew for each test, so you can be sure that tests using the same fixture won't change each other's behaviour. If pytest didn't do that by default, you'd have to do it by hand.
A big one is that the plugin system of pytest uses fixtures to make its functionality available. So if you are a web dev and want to have a mock-server for your tests, you just install pytest-localserver and now adding httpserver, httpsserver, and smtpserver arguments to your test functions will inject the fixtures from the library you just installed. This is incredibly convenient and intuitive, in particular when compared to injection mechanisms in other languages.
The bottom line is that it is useful to have a single way to include dependencies in your test suites, and pytest chooses a fixture mechanism that magically binds itself to function signatures. So while it really is no different from manually inserting the argument, the quality-of-life things pytest adds through it make it worth it.
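To make the comparison concrete, a minimal sketch (the greeting fixture is a hypothetical name):

import pytest

@pytest.fixture
def greeting():
    return 'hello world'

def test_greeting(greeting):
    # pytest calls the fixture and injects its result by matching the
    # parameter name; no manual call or wiring is needed.
    assert greeting == 'hello world'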
Fixtures are a way of centralizing your test variables and avoiding redundancy. If you are comfortable with the concept of Dependency Injection, it's basically the same advantage: pytest will automatically bind your parameters to the available fixtures, so you build tests more quickly by simply asking for what you need.
Also, fixtures enable you to easily parametrize all your tests at once, which avoids some cumbersome code if you do it by hand; see the sketch after the references below. (More info in the documentation: https://docs.pytest.org/en/latest/fixture.html#parametrizing-fixtures)
Some references:
Official documentation: https://docs.pytest.org/en/latest/fixture.html
Dependency injection: https://en.wikipedia.org/wiki/Dependency_injection
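As mentioned above, a minimal sketch of a parametrized fixture (the names are hypothetical); every test that requests db_url runs once per parameter:

import pytest

@pytest.fixture(params=['sqlite://', 'postgresql://localhost/test'])
def db_url(request):
    # request.param holds the current parameter for this test run
    return request.param

def test_url_has_scheme(db_url):
    assert '://' in db_url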

Monkeypatch persisting across unit tests python

I have a custom framework which runs different code for different clients. I have monkeypatched certain methods in order to customize functionality for a client.
Here is the pattern simplified:
# import monkeypatches here
if self.config['client'] == 'cool_dudes':
    from app.monkeypatches import Stuff
if self.config['client'] == 'cool_dudettes':
    from app.monkeypatches import OtherStuff
Here is an example patch:
from app.framework.stuff import Stuff

def function_override(self):
    pass  # overridden to do nothing

Stuff.function = function_override
This works fine when the program runs in its usual batch manner, spinning up from scratch every time. However, when running across unit tests, I find that the monkey patches persist across tests, causing unexpected behavior.
I realize that it would be far better to use an object oriented inheritance approach to these overrides, but I inherited this codebase and am not currently empowered to rearchitect it to that degree.
Barring properly re-architecting the program, how can I prevent these monkey patches from persisting across unit tests?
The modules, including app.framework.<whatever>, are not reloaded for every test, so any changes you make in them persist. The same happens if your module is stateful (that's one of the reasons why global state is not such a good idea; you should rather keep state in objects).
Your options are to:
undo the monkey-patches when needed, or
change them into something more generic that would change (semi-)automatically depending on the test running, or
(preferred) Do not reinvent the wheel and use an existing, manageable, time-proven solution for your task (or at least base your work on one if it doesn't meet your requirements completely). E.g. if you use them for mocking, see "How can one mock/stub python module like urllib". Among the suggestions there is @mock.patch, which does the patching for a specific test and undoes it upon its completion.
Anyone coming here looking for information about monkeypatching might want to have a look at pytest's monkeypatch fixture. It avoids the OP's problem by automatically undoing all modifications after the test function has finished.
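For instance, a minimal sketch reusing the question's app.framework.stuff module (so the names are only as real as the question's, and Stuff is assumed to be constructible with no arguments); the override only lives for the duration of the test:

from app.framework.stuff import Stuff

def test_client_specific_behaviour(monkeypatch):
    # Replaced only for this test; monkeypatch restores the original
    # Stuff.function afterwards, so other tests are unaffected.
    monkeypatch.setattr(Stuff, 'function', lambda self: None)
    assert Stuff().function() is None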

What is the difference between mocking and monkey patching?

I work with python and I'm a bit new to testing. I often see tests replacing an external dependency with a local method like so:
import some_module

def get_file_data():
    return "here is the pretend file data"

some_module.get_file_data = get_file_data
# proceed to test
I see this referred to as "monkey patching" as in the question. I also see the word "mock" being used a lot alongside "monkey patching" or in what seem to be very similar scenarios.
Is there any difference between the two concepts?
Monkey patching is replacing a function/method/class with another at runtime, for testing purposes, fixing a bug, or otherwise changing behaviour.
The unittest.mock library makes use of monkey patching to replace parts of your software under test with mock objects. It provides functionality for writing clever unit tests, such as:
It keeps a record of how mock objects are called, so you can test the calling behaviour of your code with asserts.
A handy decorator, patch(), for the actual monkey patching.
You can make mock objects return specific values (return_value) or raise specific exceptions (side_effect).
Mocking of 'magic methods' (e.g. __str__).
You can use mocking, for example, to replace network I/O (urllib, requests) in a client, so unittests work without depending on an external server.
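A minimal sketch of the mock-based version of the question's example (some_module and get_file_data come from the question and are hypothetical):

from unittest import mock
import some_module  # the question's hypothetical dependency

@mock.patch('some_module.get_file_data', return_value='here is the pretend file data')
def test_uses_file_data(mock_get):
    # the patch is active only inside this test and is undone afterwards
    assert some_module.get_file_data() == 'here is the pretend file data'
    mock_get.assert_called_once()  # the mock records how it was called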

Non-test methods in a Python TestCase

OK, asking here since Google search hasn't been helping me (even when using the correct keywords).
I have a class extending TestCase in which I want to have some auxiliary methods that are not going to be executed as part of the tests; they'll be used to generate mocked objects and other auxiliary things needed by almost any test.
I know I could use the @skip decorator so unittest doesn't run a particular test method, but I think that's an ugly hack for my purpose. Any tips?
Thanks in advance, community :D
I believe that you don't have to do anything. Your helper methods should just not start with test_.
The only methods that unittest will execute [1] are setUp, anything that starts with test, and tearDown [2], in that order. You can make helper methods and call them anything except for those three things, and they will not be executed by unittest.
You can think of setUp as __init__: if you're generating mock objects that are used by multiple tests, create them in setUp.
def setUp(self):
    self.mock_obj = MockObj()
[1]: This is not entirely true, but these are the main 3 groups of methods that you can concentrate on when writing tests.
[2]: For legacy reasons, unittest will execute both test_foo and testFoo, but test_foo is the preferred style these days. setUp and tearDown should appear as such.
The test runner will only directly execute methods beginning with test, so just make sure your helper methods' names don't begin with test.
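A minimal sketch with hypothetical names; make_admin is a plain helper and is never collected as a test:

import unittest

class ExampleTest(unittest.TestCase):
    def setUp(self):
        # runs before every test method
        self.base_user = {'name': 'alice'}

    def make_admin(self, user):
        # helper: not run as a test because its name doesn't start with "test"
        return {**user, 'admin': True}

    def test_admin_flag(self):
        self.assertTrue(self.make_admin(self.base_user)['admin'])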
