Many pytest fixtures vs. one large "container" fixture - python

We have a large Python project that is tested with pytest, currently using unittest-style classes, and we have started migrating it to module-based, function-style tests.
We are having a debate about whether we should:
Split our large test base class into many small, independent pytest fixtures; or
Maintain one large fixture that lazily imports all the other fixtures.
Pros for many fixtures:
Modular and probably easy to maintain
Each test only uses what it needs
Pros for one large fixture:
Less boilerplate code, each test only has one extra keyword arg
What should we do? Any opinions are welcome as long as they are explained. Thanks :)

Using specific fixtures has a lot of advantages over one big fixture; this flexibility is a large part of why pytest gained its popularity.
Different fixtures can reproduce various mutually exclusive states of the system under test. This is useful when you want to test different cases of your system's behavior. A single fixture does not give you that flexibility.
Pytest lets you compose fixtures, so one fixture can use the result of another. Decomposition is an effective programming pattern, and tests are no exception.
Fixtures in pytest can be parameterized, which is very useful functionality, but it becomes impossible to use if you make one big fixture for all tests.
conftest.py is directory-specific in pytest. So fixtures can be global (located in the root conftest.py), local (located inside the test module), or intermediate (located in a conftest.py at the package level). This lets you reuse common code without losing flexibility in specific cases.
Fixtures have scope (function, class, module, session), which gives additional flexibility.
The core idea of the pytest framework is to use fixtures at exactly the levels where they are needed. This is a big advantage over the xUnit style, but if you don't use these advantages, the transition to pytest makes little sense.
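A minimal sketch of what that composition and scoping look like in practice (sqlite here is just a stand-in for any expensive shared resource):

import sqlite3
import pytest

@pytest.fixture(scope="session")
def db_connection():
    # expensive setup, created once for the whole test session
    conn = sqlite3.connect(":memory:")
    yield conn
    conn.close()

@pytest.fixture
def users_table(db_connection):
    # a small, function-scoped fixture composed on top of the session-scoped one
    db_connection.execute("CREATE TABLE users (name TEXT)")
    yield db_connection
    db_connection.execute("DROP TABLE users")

def test_insert_user(users_table):
    users_table.execute("INSERT INTO users VALUES ('alice')")
    count = users_table.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 1

Each test gets a freshly prepared users table, while the connection itself is created only once per session.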


How to run a scoped function before all pytest fixtures of that scope?

I saw this question, asking the same about doing things before tests.
I need to do things before fixtures.
For example, I am setting up a dockerized environment, which I have to clean before building. To make things more complicated, I am using this plugin which defines fixtures I can't control or change, and I need something that comes before all fixtures (and does docker-compose down and other cleanup).
For example, when pytest starts, run the common per-session pre-step, then the fixtures, then the per-module pre-step, then the fixtures, and so on.
Is this a supported hook in pytest?
I couldn't find a relevant doc.
As @MrBean Bremen stated, pytest_sessionstart does the trick.
An annoying problem with that is that I can't use fixtures inside it (naturally); I get: Argument(s) {'fixture_name'} are declared in the hookimpl but can not be found in the hookspec
I tried to work around that with pytest-dependency, but it doesn't seem to work on fixtures, only on tests.
I am left with the hacky workaround of extracting the data sessionstart needs into plain functions, and calling them both in sessionstart and in similarly named fixtures.
It works, but it is ugly.
Would accept a cleaner solution over this one
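For reference, a minimal sketch of that workaround (the helper and fixture names here are made up):

# conftest.py
import subprocess

import pytest

def clean_docker_environment():
    # plain helper so it can be called from a hook, outside any fixture
    subprocess.run(["docker-compose", "down", "--remove-orphans"], check=False)

def pytest_sessionstart(session):
    # runs once, before any fixture (including plugin fixtures) is set up
    clean_docker_environment()

@pytest.fixture(scope="session")
def clean_environment():
    # the same helper reused by a similarly named fixture, for tests that want it explicitly
    clean_docker_environment()
    yield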

Using Fixtures vs passing method as argument

I'm just learning Python and pytest and came across fixtures. Pardon the basic question, but I'm wondering what the advantage of using fixtures is, since you can already pass a function as an argument, for example:
def method1():
    return 'hello world'

def method2(methodToRun):
    result = methodToRun()
    return result

method2(method1)
What would be the advantage of passing a @pytest.fixture object as an argument instead?
One difference is that fixtures pass the result of calling the function, not the function itself. That doesn't answer why you would want to use pytest.fixture instead of just calling the function manually, though, so I'll list a couple of things.
One reason is the global availability. After you write a fixture in conftest.py, you can use it in your whole test suite just by referencing its name and avoid duplicating it, which is nice.
If your fixture returns a mutable object, pytest also handles the fresh call for you, so you can be sure that other tests using the same fixture won't change its state for each other. If pytest didn't do that by default, you'd have to do it by hand.
A big one is that the plugin system of pytest uses fixtures to make its functionality available. So if you are a web dev and want to have a mock-server for your tests, you just install pytest-localserver and now adding httpserver, httpsserver, and smtpserver arguments to your test functions will inject the fixtures from the library you just installed. This is incredibly convenient and intuitive, in particular when compared to injection mechanisms in other languages.
The bottom line is that it is useful to have a single way to include dependencies in your test suites, and pytest chooses a fixture mechanism that magically binds itself to function signatures. So while it really is no different from manually inserting the argument, the quality-of-life features pytest adds through it make it worth it.
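To illustrate the "reference it by name" point, here is a minimal sketch (sample_config is a made-up fixture):

# conftest.py
import pytest

@pytest.fixture
def sample_config():
    # available to every test in the suite without any import
    return {"retries": 3, "timeout": 5}

# test_something.py
def test_uses_config(sample_config):
    # pytest matches the argument name and injects the fixture's return value
    assert sample_config["retries"] == 3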
Fixtures are a way of centralizing your test setup and avoiding redundancy. If you are comfortable with the concept of Dependency Injection, they offer basically the same advantages: pytest automatically binds your test parameters to the available fixtures, so you build tests more quickly by simply asking for what you need.
Also, fixtures enable you to easily parametrize all your tests at once, which avoids some cumbersome code if you were to do it by hand (more info in the documentation: https://docs.pytest.org/en/latest/fixture.html#parametrizing-fixtures); a minimal sketch follows the references below.
Some references:
Official documentation: https://docs.pytest.org/en/latest/fixture.html
Dependency injection: https://en.wikipedia.org/wiki/Dependency_injection
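A minimal sketch of the parametrization mentioned above (the backend names are arbitrary examples):

import pytest

@pytest.fixture(params=["sqlite", "postgres"])
def backend_name(request):
    # each test that uses this fixture runs once per parameter
    return request.param

def test_backend_is_supported(backend_name):
    assert backend_name in ("sqlite", "postgres")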

Setting up environment for testing in Python

I'm writing integration tests using plain unittest in Python (import unittest) and am creating stubs for some external services. Now I want to run the same tests against a real implementation, but also keep the stubs. That way I can run the tests with and without the stubs and compare behaviour.
I'm running my tests both from setuptools and through PyCharm. Is there some generic way for me to set/inject/bootstrap a parameter which tells my code whether to use the stub or the real implementation? Command line preferable. Any pointers appreciated. :)
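The kind of switch being asked about could look roughly like this sketch, where USE_STUBS and the myapp modules are hypothetical and the variable can be set on the command line (e.g. USE_STUBS=0 python -m unittest):

# a sketch, assuming a hypothetical USE_STUBS variable and module layout
import os
import unittest

class PaymentServiceTest(unittest.TestCase):
    def setUp(self):
        # choose the stub or the real implementation based on the environment
        if os.environ.get("USE_STUBS", "1") == "1":
            from myapp.stubs import StubPaymentService as PaymentService
        else:
            from myapp.services import PaymentService
        self.service = PaymentService()

    def test_charge(self):
        self.assertTrue(self.service.charge(100))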
It sounds like you are looking for a mocking framework. Mocking frameworks allow you to create a 'stub' for the method from within your test. This is good because you don't want to be inserting any test specific code into your actual code.
One of the more popular mocking frameworks for Python 2 is mock (in fact it comes with Python 3 as unittest.mock), so you can write the code as:
from mock import MagicMock  # on Python 3: from unittest.mock import MagicMock

def test_foo_mocked():
    bar = MagicMock()
    bar.return_value = 'fake_val'
    assert bar() == 'fake_val'

def test_foo_real():
    # here bar is the real implementation imported from your code
    assert bar() == 'real_val'
Side Note:
I would really recommend that you think of these as completely unrelated tests. There are many benefits to keeping your integration tests separate from your unit tests. Thinking of them as two different ways of running the 'same test' may encourage you to write bad tests. Unit tests should be able to test things that would be difficult or impossible to test through integration tests and vice versa.

Is having a unit test that is mostly mock verification a smell?

I have a class that connects three other services, specifically designed to make implementing the other services more modular, but the bulk of my unit test logic is in mock verification. Is there a way to redesign to avoid this?
Python example:
class Input(object): pass
class Output(object): pass
class Finder(object): pass

class Correlator(object):
    def __init__(self, input, output, finder):
        self.input = input
        self.output = output
        self.finder = finder

    def run(self):
        self.finder.find(self.input.GetRows())
        self.output.print(self.finder)
I then have to mock input, output and finder. Even if I do make another abstraction and return something from Correlator.run(), it will still have to be tested as a mock.
Just ask yourself: what exactly do you need to check in this particular test case? If that check does not depend on the other classes being more than dummies, then you are OK.
However, a lot of mocks is usually a smell, in the sense that you are probably trying to test integration without actually doing integration. So if you assume that because the class passes its tests with mocks it will work fine with the real classes, then yes, you have to write some more tests.
Personally, I don't write many unit tests at all. I'm a web developer and I prefer functional tests that exercise the whole application via HTTP requests, as users would. Your case may be different.
There's no reason to only use unit tests - maybe integration tests would be more useful for this case. Initialize all the objects properly, use the main class a bit, and assert on the (possibly complex) results. That way you'll test interfaces, output predictability, and other things that are important further up the stack. I've used this before, and found that something which is difficult to integration-test probably has too many attributes/parameters or too complicated/wrongly formatted output.
On a quick glance, this does look like the level of mocking is becoming too large. If you're using a dynamic language (I'm assuming yes here, since your example is in Python), I'd try to construct subclasses of the production classes with the most problematic methods overridden to return mocked data, so you'd get a mix of production and mocked code. If your code path doesn't allow for instantiating the objects, I'd try monkey-patching in replacement methods that return mock data.
Whether or not this is a code smell also depends on the quality of the mocked data. Dropping into a debugger and copy-pasting known-correct data, or sniffing it from the network, is in my experience the preferred way of ensuring that.
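A rough sketch of that subclass-with-overrides idea, reusing the Correlator example from the question (and assuming the real Input/Output/Finder classes do more than the placeholders shown there):

class FakeInput(Input):
    # override only the data-producing method; everything else stays production code
    def GetRows(self):
        return [("known", "good"), ("sample", "rows")]

class FakeFinder(Finder):
    def find(self, rows):
        self.rows = rows

class RecordingOutput(Output):
    def __init__(self):
        self.printed = []

    def print(self, finder):
        self.printed.append(finder)

def test_run_passes_rows_and_prints_finder():
    finder = FakeFinder()
    output = RecordingOutput()
    Correlator(FakeInput(), output, finder).run()
    assert finder.rows == [("known", "good"), ("sample", "rows")]
    assert output.printed == [finder]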
Integration vs. unit testing is also an economic question: how painful is it to replace unit tests with integration/functional tests? The larger the scale of your system, the more there is to gain from lightweight mocking, and hence unit tests.

Unit-testing with dependencies between tests

How do you do unit testing when you have
some general unit tests
more sophisticated tests checking edge cases, depending on the general ones
To give an example, imagine testing a CSV-reader (I just made up a notation for demonstration),
def test_readCsv(): ...

@dependsOn(test_readCsv)
def test_readCsv_duplicateColumnName(): ...

@dependsOn(test_readCsv)
def test_readCsv_unicodeColumnName(): ...
I expect sub-tests to be run only if their parent test succeeds. The reason behind this is that running these tests takes time. Many failure reports that go back to a single reason wouldn't be informative, either. Of course, I could shoehorn all edge-cases into the main test, but I wonder if there is a more structured way to do this.
I've found these related but different questions,
How to structure unit tests that have dependencies?
Unit Testing - Is it bad form to have unit test calling other unit tests
UPDATE:
I've found TestNG which has great built-in support for test dependencies. You can write tests like this,
@Test(dependsOnMethods = {"test_readCsv"})
public void test_readCsv_duplicateColumnName() {
    ...
}
Personally, I wouldn't worry about creating dependencies between unit tests. This sounds like a bit of a code smell to me. A few points:
If a test fails, let the others fail too, and get a good idea of the scale of the problem that the adverse code change caused.
Test failures should be the exception rather than the norm, so why waste effort and create dependencies when the vast majority of the time (hopefully!) no benefit is derived? If failures happen often, your problem is not with unit test dependencies but with frequent test failures.
Unit tests should run really fast. If they are running slow, then focus your efforts on increasing the speed of these tests rather than preventing subsequent failures. Do this by decoupling your code more and using dependency injection or mocking.
Proboscis is a python version of TestNG (which is a Java library).
See packages.python.org/proboscis/
It supports dependencies, e.g.
@test(depends_on=[test_readCsv])
def test_readCsv_duplicateColumnName():
    ...
I'm not sure what language you're referring to (as you don't specifically mention it in your question), but for something like PHPUnit there is a @depends annotation that will only run a test if the depended-upon test has already passed.
Depending on what language or unit-testing framework you use, there may be something similar available.
I have implemented a plugin for Nose (Python) which adds support for test dependencies and test prioritization.
As mentioned in the other answers/comments this is often a bad idea, however there can be exceptions where you would want to do this (in my case it was performance for integration tests - with a huge overhead for getting into a testable state, minutes vs hours).
You can find it here: nosedep.
A minimal example is:
def test_a():
    pass

@depends(before=test_a)
def test_b():
    pass

This ensures that test_b is always run before test_a.
You may want to use pytest-dependency. According to their documentation, the code looks elegant:
import pytest

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_a():
    assert False

@pytest.mark.dependency()
def test_b():
    pass

@pytest.mark.dependency(depends=["test_a"])
def test_c():
    pass

@pytest.mark.dependency(depends=["test_b"])
def test_d():
    pass

@pytest.mark.dependency(depends=["test_b", "test_c"])
def test_e():
    pass
Please note that this is a plugin for pytest, not for unittest (which is part of Python itself). So you need two more dependencies (e.g. add them to requirements.txt):
pytest==5.1.1
pytest-dependency==0.4.0
According to best practices and unit-testing principles, a unit test should not depend on other tests.
Each test case should check a concrete, isolated behavior.
Then, if some test case fails, you will know exactly what went wrong in your code.
