Using fixtures vs. passing a method as argument - Python

I'm just learning Python and pytest and came across fixtures. Pardon the basic question, but I'm wondering what the advantage of using fixtures is, since you can already pass a method as an argument, for example:
def method1():
    return 'hello world'

def method2(methodToRun):
    result = methodToRun()
    return result

method2(method1)
What would be the advantage of passing a @pytest.fixture object as an argument instead?

One difference is that fixtures pass in the result of calling the function, not the function itself. That doesn't answer why you'd want to use pytest.fixture instead of just calling the function manually, though, so I'll list a couple of reasons.
One reason is global availability. After you write a fixture in conftest.py, you can use it across your whole test suite just by referencing its name, which avoids duplication.
If your fixture returns a mutable object, pytest also re-creates it for each test by default (function scope), so you can be sure that tests using the same fixture won't affect each other's behavior. If pytest didn't do that by default, you'd have to do it by hand.
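A minimal sketch illustrating both points (tmp_config is a made-up name for illustration): a fixture defined once in conftest.py is available by name to every test below that directory, and each test gets its own freshly created value.

# conftest.py
import pytest

@pytest.fixture
def tmp_config():
    # Each test that asks for tmp_config gets a fresh dict,
    # so mutations in one test cannot leak into another.
    return {"env": "test", "retries": 3}

# test_example.py
def test_env(tmp_config):
    assert tmp_config["env"] == "test"

def test_mutation_is_isolated(tmp_config):
    tmp_config["retries"] = 0          # modifying this copy...
    assert tmp_config["retries"] == 0  # ...only affects this test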
A big one is that pytest's plugin system uses fixtures to expose its functionality. So if you are a web developer and want a mock server for your tests, you just install pytest-localserver, and adding httpserver, httpsserver, or smtpserver arguments to your test functions will inject the fixtures from the library you just installed. This is incredibly convenient and intuitive, particularly compared to injection mechanisms in other languages.
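As a rough sketch of what that looks like (based on pytest-localserver's documented httpserver fixture with its serve_content method and url attribute; treat the exact calls as an assumption if your version differs, and it also assumes requests is available):

import requests  # assumed to be installed in the test environment

def test_client_against_local_server(httpserver):
    # httpserver is injected by pytest-localserver; no setup code needed here.
    httpserver.serve_content("hello from the fake server", code=200)
    response = requests.get(httpserver.url)
    assert response.status_code == 200
    assert "fake server" in response.text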
The bottom line is that it is useful to have a single way to include dependencies in your test suites, and pytest chose a fixture mechanism that binds itself to function signatures by name. So while it really is no different from manually passing in the argument, the quality-of-life features pytest adds on top make it worth it.

Fixtures are a way of centralizing your test variables and avoiding redundancy. If you are comfortable with the concept of dependency injection, these are basically the same advantages: pytest will automatically bind your parameters to the available fixtures, so you build tests more quickly by simply asking for what you need.
Also, fixtures enable you to easily parametrize all of your tests at once, which avoids some cumbersome code if you were to do it by hand (more info in the documentation: https://docs.pytest.org/en/latest/fixture.html#parametrizing-fixtures).
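A small sketch of a parametrized fixture (the names are illustrative): every test that requests the fixture runs once per parameter.

import pytest

@pytest.fixture(params=["sqlite", "postgres"])
def db_backend(request):
    # request.param holds the current parameter value;
    # each test that uses db_backend runs once per backend name.
    return request.param

def test_backend_name_is_lowercase(db_backend):
    assert db_backend == db_backend.lower()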
Some references:
Official documentation: https://docs.pytest.org/en/latest/fixture.html
Dependency injection: https://en.wikipedia.org/wiki/Dependency_injection

Related

Many pytest fixtures vs. one large "container" fixture

We have a large Python project which is tested using pytest, currently with unittest-style classes, and we started migrating it to module-based, function-style tests.
We are having a debate whether we should:
Split our large test base-class into many small, independent pytest fixtures; or
Maintain one large fixture which lazily imports all other fixtures.
Pros for many fixtures:
Modular and probably easy to maintain
Each test only uses what it needs
Pros for one large fixture:
Less boilerplate code, each test only has one extra keyword arg
What should we do? Any opinions are welcome as long as they are explained. Thanks :)
Using many specific fixtures has a lot of advantages over one big fixture; this is a large part of why pytest gained its popularity.
Different fixtures can reproduce different, mutually exclusive states of the system under test. This is useful when you want to cover various cases of your system's behavior. A single fixture does not give you that flexibility.
pytest also lets you compose fixtures flexibly, where one fixture uses the result of another. Decomposition is an effective programming pattern, and tests are no exception.
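For example, a minimal sketch of one fixture building on another (the names are made up):

import pytest

@pytest.fixture
def base_url():
    return "https://example.test/api"

@pytest.fixture
def client(base_url):
    # This fixture depends on base_url; pytest resolves the chain automatically.
    return {"base_url": base_url, "timeout": 5}

def test_client_points_at_api(client):
    assert client["base_url"].endswith("/api")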
Fixtures in pytest can be parametrized, which is very useful functionality, but one that is impossible to apply if you have one big fixture for all tests.
conftest.py is directory-specific in pytest, so fixtures can be global (in the top-level conftest.py), local (defined inside the test module), or intermediate (in a conftest.py at the package level). This lets you reuse common code without losing flexibility in specific cases.
Fixtures have scope (function, class, module, session), which gives additional flexibility.
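A short sketch of scope in practice (expensive_resource and scratch_list are illustrative names): a session-scoped fixture is created once and shared, while the default function scope creates a fresh value per test.

import pytest

@pytest.fixture(scope="session")
def expensive_resource():
    # Created once for the whole test session and reused by every test.
    return {"connections": []}

@pytest.fixture  # default scope="function"
def scratch_list():
    # Re-created for every single test.
    return []

def test_uses_both(expensive_resource, scratch_list):
    scratch_list.append("item")
    assert scratch_list == ["item"]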
The core idea of the pytest framework is to use fixtures at exactly the levels where they are needed. This is a big advantage over the xUnit style, but if you don't use these advantages, the transition to pytest makes no sense.

What is the difference between mocking and monkey patching?

I work with Python and I'm a bit new to testing. I often see tests replacing an external dependency with a local method, like so:
import some_module

def get_file_data():
    return "here is the pretend file data"

some_module.get_file_data = get_file_data
# proceed to test
I see this referred to as "monkey patching". I also see the word "mock" used a lot alongside "monkey patching", or in what seem to be very similar scenarios.
Is there any difference between the two concepts?
Monkey patching is replacing a function/method/class with another at runtime, for testing purposes, fixing a bug, or otherwise changing behaviour.
The unittest.mock library makes use of monkey patching to replace parts of your software under test with mock objects. It provides functionality for writing clever unit tests, such as:
It keeps a record of how mock objects are called, so you can assert on the calling behaviour of your code.
A handy decorator, patch(), for the actual monkey patching.
You can make mock objects return specific values (return_value) or raise specific exceptions (side_effect).
Mocking of "magic methods" (e.g. __str__).
You can use mocking, for example, to replace network I/O (urllib, requests) in a client, so unittests work without depending on an external server.
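A self-contained sketch tying these features together (read_remote_title is a made-up function for illustration): patch() temporarily replaces urllib.request.urlopen with a mock, the mock's return value is configured, and the test asserts on how it was called.

from unittest import mock
import urllib.request

def read_remote_title(url):
    # Code under test: normally does real network I/O via urllib.
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()[:10]

def test_read_remote_title_without_network():
    fake_resp = mock.MagicMock()
    fake_resp.read.return_value = b"Hello, monkey-patched world"
    fake_resp.__enter__.return_value = fake_resp

    # patch() monkey-patches urlopen only for the duration of the with block.
    with mock.patch("urllib.request.urlopen", return_value=fake_resp) as fake_urlopen:
        assert read_remote_title("https://example.test") == "Hello, mon"
        fake_urlopen.assert_called_once_with("https://example.test")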

Python Testing - Reset all mocks?

When doing unit testing with Python / pytest, if you do not have patch decorators or with-patch blocks throughout your code, is there a way to reset all mocks at the end of every file / module to avoid inter-file test pollution?
It seems like something that is mocked in one Python test file remains mocked in other files with the same return value, which means my mocks are persisting between tests and files (when a patch decorator or with-patch block is NOT used).
Is there any way around this other than patching? There wouldn't happen to be a mock.reset_all_mocks() or something like that, would there?
What I ended up doing was using the pytest-mock library. According to the Readme:
This plugin installs a mocker fixture which is a thin-wrapper around the patching API provided by the excellent mock package, but with the benefit of not having to worry about undoing patches at the end of a test. (Emphasis added.)
So now I can do: mocker.patch.object(module, 'method', return_value='hi'), and the patch will be removed at the end of the test.
There is no need to use with any more, so this solution scales nicely if you have many mocks in one test or if you want to change mocks during the test.
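A short sketch of what that looks like in practice (os.getcwd is just a convenient stand-in for whatever you would normally patch), assuming pytest-mock is installed so the mocker fixture is available:

import os

def test_patch_is_automatically_undone(mocker):
    # mocker.patch.object behaves like mock.patch.object, but pytest-mock
    # undoes the patch for you when the test finishes.
    mocker.patch.object(os, "getcwd", return_value="/fake/dir")
    assert os.getcwd() == "/fake/dir"

def test_patch_did_not_leak():
    # In a later test the real os.getcwd is back in place.
    assert os.getcwd() != "/fake/dir"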
After monkey-patching, I undo it at the end of the test to avoid any leaking into other tests, or to limit the patching to the scope I want.
def test1(monkeypatch):
    monkeypatch.setattr(...)
    assert (...)
    monkeypatch.undo()
Why not use monkeypatch?
The monkeypatch function argument helps you to safely set/delete an attribute, dictionary item or environment variable or to modify sys.path for importing.
You can do:
def test1(monkeypatch):
    monkeypatch.setattr(...)
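For a slightly fuller sketch (EXAMPLE_API_KEY is an illustrative variable name), note that monkeypatch reverts its changes automatically when the test ends, so no explicit undo is needed:

import os

def test_reads_api_key(monkeypatch):
    # setenv/setattr changes are reverted automatically after the test.
    monkeypatch.setenv("EXAMPLE_API_KEY", "dummy-key")
    assert os.environ["EXAMPLE_API_KEY"] == "dummy-key"

def test_env_is_clean_again():
    # The variable set in the previous test is gone here.
    assert os.environ.get("EXAMPLE_API_KEY") != "dummy-key"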

Passing custom Python objects to nosetests

I am attempting to re-organize our test libraries for automation and nose seems really promising. My question is, what is the best strategy for passing Python objects into nose tests?
Our tests are organized in a testlib with a bunch of modules that exercise different types of request operations. Something like this:
testlib
\-testmoda
\-testmodb
\-testmodc
In some cases the test modules (e.g. testmoda) are nothing but test_something1(), test_something2() functions, while in other cases we have a TestModB class in testmodb with test_anotherthing1(), test_anotherthing2() methods. The cool thing is that nose easily finds both.
Most of those test functions are request factory stuff that can easily share a single connection to our server farm. Thus we do a lot of test_something1(cnn), TestModB.test_anotherthing2(cnn), etc.
Currently we don't use nose, instead we have a hodge-podge of homegrown driver scripts with hard-coded lists of tests to execute. Each of those driver scripts creates its own connection object. Maintaining those scripts and the connection minutia is painful.
I'd like to take full advantage of nose's beautiful discovery functionality while passing in a connection object of my choosing.
Thanks in advance!
Rob
P.S. The connection objects are not pickle-able. :(
Could you use a factory to create the connections, then have the functions test_something1() (taking no arguments) use the factory to get a connection?
As far as I can tell, there is no easy way to simply pass custom objects to Nose.
However, as Matt pointed out there are some viable workarounds to achieve similar results.
Basically, do this (a sketch follows the list):
Setup a data dictionary as a package level global
Add custom objects to that dictionary
Create some factory functions to return those custom objects, or create new ones if none are present/suitable
Refactor the existing testlib\testmod* modules to use the factory
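A rough sketch of the first three steps (all names here, including _SHARED, get_connection, and make_default_connection, are made up for illustration; the real connection class would come from your own code):

# testlib/__init__.py  (package-level "container" for shared objects)
_SHARED = {}

def set_connection(conn):
    # Called once by whatever bootstraps the test run.
    _SHARED["connection"] = conn

def get_connection():
    # Factory: hand back the registered connection, or build a default one.
    if "connection" not in _SHARED:
        _SHARED["connection"] = make_default_connection()
    return _SHARED["connection"]

def make_default_connection():
    return object()  # placeholder for your real, non-picklable connection

# testlib/testmoda.py
# from testlib import get_connection
def test_something1():
    cnn = get_connection()  # no argument needed, so nose discovers this as-is
    assert cnn is not None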

Is having a unit test that is mostly mock verification a smell?

I have a class that connects three other services, specifically designed to make implementing the other services more modular, but the bulk of my unit test logic is in mock verification. Is there a way to redesign to avoid this?
Python example:
class Input(object): pass
class Output(object): pass
class Finder(object): pass

class Correlator(object):
    def __init__(self, input, output, finder):
        self.input = input
        self.output = output
        self.finder = finder

    def run(self):
        self.finder.find(self.input.GetRows())
        self.output.print(self.finder)
I then have to mock input, output and finder. Even if I do make another abstraction and return something from Correlator.run(), it will still have to be tested as a mock.
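To make the situation concrete, a test like the sketch below (using unittest.mock against the Correlator class above) ends up being almost entirely mock verification, which is exactly the concern:

from unittest import mock

def test_run_wires_the_collaborators_together():
    input_, output, finder = mock.Mock(), mock.Mock(), mock.Mock()
    input_.GetRows.return_value = ["row1", "row2"]

    Correlator(input_, output, finder).run()

    # The assertions only verify how the mocks were called.
    finder.find.assert_called_once_with(["row1", "row2"])
    output.print.assert_called_once_with(finder)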
Just ask yourself: what exactly do you need to check in this particular test case? If that check does not rely on the other classes being real rather than dummies, then you are OK.
However, a lot of mocks is usually a smell, in the sense that you are probably trying to test integration without actually doing integration. So if you assume that because the class passes its tests with mocks it will also work fine with the real classes, then yes, you have to write some more tests.
Personally, I don't write many unit tests at all. I'm a web developer and I prefer functional tests that exercise the whole application via HTTP requests, as users would. Your case may be different.
There's no reason to use only unit tests - maybe integration tests would be more useful for this case. Initialize all the objects properly, use the main class a bit, and assert on the (possibly complex) results. That way you'll test interfaces, output predictability, and other things which are important further up the stack. I've used this before and found that something which is difficult to integration-test probably has too many attributes/parameters or too complicated/wrongly formatted output.
At a quick glance, this does look like the level of mocking is becoming too large. If you're in a dynamic language (I'm assuming yes, since your example is in Python), I'd try to construct subclasses of the production classes with the most problematic methods overridden to return mocked data, so you'd get a mix of production and mocked code. If your code path doesn't allow for instantiating the objects, I'd try monkey patching in replacement methods that return mock data.
Whether or not this is a code smell also depends on the quality of the mocked data. Dropping into a debugger and copy-pasting known correct data, or sniffing it from the network, is in my experience the preferred way of ensuring that.
Integration vs. unit testing is also an economic question: how painful is it to replace unit tests with integration/functional tests? The larger the scale of your system, the more there is to gain from lightweight mocking, and hence from unit tests.
