Disable autouse fixtures on specific pytest marks - python

Is it possible to prevent the execution of "function scoped" fixtures with autouse=True on specific marks only?
I have the following fixture set to autouse so that all outgoing requests are automatically mocked out:
import pytest
from unittest.mock import MagicMock

@pytest.fixture(autouse=True)
def no_requests(monkeypatch):
    monkeypatch.setattr("requests.sessions.Session.request", MagicMock())
But I have a mark called endtoend that I use to define a series of tests that are allowed to make external requests for more robust end-to-end testing. I would like to inject no_requests into all tests (the vast majority), but not into tests like the following:
@pytest.mark.endtoend
def test_api_returns_ok():
    assert make_request().status_code == 200
Is this possible?

You can also use the request object in your fixture to check the markers applied to the test, and skip the patching if a specific marker is set:
import pytest

@pytest.fixture(autouse=True)
def autofixt(request):
    if 'noautofixt' in request.keywords:
        return
    print("patching stuff")

def test1():
    pass

@pytest.mark.noautofixt
def test2():
    pass
Output with -vs:
x.py::test1 patching stuff
PASSED
x.py::test2 PASSED
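
Applied to the question's setup, the marker check and the monkeypatching can be combined in one fixture (a sketch reusing the endtoend mark and no_requests fixture from the question):

import pytest
from unittest.mock import MagicMock

@pytest.fixture(autouse=True)
def no_requests(request, monkeypatch):
    # Tests marked @pytest.mark.endtoend keep real network access.
    if 'endtoend' in request.keywords:
        return
    monkeypatch.setattr("requests.sessions.Session.request", MagicMock())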

In case you have your endtoend tests in specific modules or classes, you could also just override the no_requests fixture locally, for example assuming you group all your end-to-end tests in a file called test_end_to_end.py:
# test_end_to_end.py
import pytest

@pytest.fixture(autouse=True)
def no_requests():
    return

def test_api_returns_ok():
    # Should make a real request.
    assert make_request().status_code == 200

I wasn't able to find a way to disable fixtures with autouse=True, but I did find a way to revert the changes made in my no_requests fixture. monkeypatch has a method undo that reverts all patches made on the stack, so I was able to call it in my endtoend tests like so:
@pytest.mark.endtoend
def test_api_returns_ok(monkeypatch):
    monkeypatch.undo()
    assert make_request().status_code == 200

It would be difficult, and probably not possible, to cancel or change the autouse fixture.
You can't cancel an autouse fixture, precisely because it is applied automatically. Maybe you could change the autouse fixture's behavior based on a mark's condition, but this would be hackish and difficult.
Possibly with:
import pytest
from _pytest.mark import MarkInfo
I couldn't find a way to do this, but maybe the @pytest.fixture(autouse=True) could get the MarkInfo, and if it came back 'endtoend' the fixture wouldn't set the attribute. But you would also have to set a condition in the fixture parameters, i.e. something like @pytest.fixture(True=MarkInfo, autouse=True). I couldn't find a way to make that work.
It's recommended that you organize tests to prevent this.
You could separate the no_requests tests from the endtoend tests by either:
limiting the scope of your autouse fixture
putting no_requests into a class
not making it autouse at all, and instead passing it as a parameter to each test that needs it (see the sketch after the class example below)
Like so:
class NoRequests:
    @pytest.fixture(scope='module', autouse=True)
    def no_requests(self, monkeypatch):
        monkeypatch.setattr("requests.sessions.Session.request", MagicMock())

    def test_no_request1(self):
        # do stuff here
        # and so on
        pass
This is good practice, and maybe a different organization could help.
But in your case, it's probably easiest to use monkeypatch.undo().
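
For the last option above, a minimal sketch (reusing no_requests and the question's make_request helper) could look like this:

import pytest
from unittest.mock import MagicMock

@pytest.fixture  # note: no autouse
def no_requests(monkeypatch):
    monkeypatch.setattr("requests.sessions.Session.request", MagicMock())

def test_internal_logic(no_requests):
    # Outgoing requests are mocked because the fixture is requested explicitly.
    ...

@pytest.mark.endtoend
def test_api_returns_ok():
    # No no_requests argument here, so real requests go out.
    assert make_request().status_code == 200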

Related

Local import as pytest fixture?

I need to import some functions locally within my tests (yes, the code base can be designed better to avoid this necessity, but let's assume we can't do that).
That means the first line of each of my tests within a module looks like this:
def test_something():
    from worker import process_message
    process_message()
Now I wanted to make this more DRY by creating the following fixture:
@pytest.fixture(scope="module", autouse=True)
def process_message():
    from worker import process_message
    return process_message
But I always get the error
Fixture "process_message" called directly. Fixtures are not meant to
be called directly, but are created automatically when test functions
request them as parameters. See
https://docs.pytest.org/en/stable/explanation/fixtures.html for more
information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly
about how to update your code.
The linked documentation doesn't help me much.
How can I achieve what I want? Obviously, I'd like the fixture to return the function handle.
The reason this happens is that you are calling the fixture directly from one of your tests. I assume that your test code with the fixture looks something like this:
import pytest

@pytest.fixture(scope="module", autouse=True)
def process_message():
    from worker import process_message
    return process_message

def test_something():
    process_message()
And then test_something fails with the specified exception.
The way to fix it is to add process_message as an argument to the test function, indicating that you are using it as a fixture:
def test_something(process_message):
    process_message()
By the way, since you have to specify the process_message fixture in every test in order to call it, there is no point in using autouse=True, and it can be removed.
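
Putting it together, the fixture from the question reduces to this (autouse removed):

import pytest

@pytest.fixture(scope="module")
def process_message():
    from worker import process_message
    return process_message

def test_something(process_message):
    process_message()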

Can a python unittest class add an assert statement for each test method in a parent class?

I have a unit test class that is a subclass of Python's unittest.TestCase:
import unittest

class MyTestClass(unittest.TestCase):
    run_parameters = {'param1': 'on'}

    def someTest(self):
        self.assertEqual(something, something_else)
Now I want to create a child class that modifies, say run_parameters, and adds an additional assert statement on top of what was already written:
class NewWayToRunThings_TestClass(MyTestClass):
    run_parameters = {'param1': 'blue'}
    # Want someTest, and all other tests in MyTestClass, to now run
    # with an additional assert statement
Is there someway to accomplish this so that each test runs with an additional assert statement to check that my parameter change worked properly across all my tests?
Yes, there is, but it may not be a good idea, because:
assertions are hidden behind difficult-to-understand Python magic
assertions aren't explicit
Could you update your methods to reflect the new contract and expected param?
Also, if a single parameter change breaks a huge number of tests, to the point where it is easier to dynamically patch the test class than to update the tests, the test suite may not be focused enough. A less magical alternative is sketched below.
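
If you do want one extra check applied to every inherited test, a less magical option is to put the assertion in setUp, which unittest runs before each test method. A minimal sketch, assuming the classes from the question:

class NewWayToRunThings_TestClass(MyTestClass):
    run_parameters = {'param1': 'blue'}

    def setUp(self):
        super().setUp()
        # Runs before every test inherited from MyTestClass; fails fast
        # if the parameter change did not take effect.
        self.assertEqual(self.run_parameters['param1'], 'blue')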

How to evade non-called fixture in Pytest?

Pytest has a brilliant feature: fixtures.
To make a reusable fixture, we mark a function with a special decorator:

@pytest.fixture
def fix():
    return {...}
It can later be used in our test through an argument name matching the original name of the fixture:

def test_me(fix):
    fix['field'] = 'expected'
    assert fix['field'] == 'expected'
From time to time, though, we might forget to specify the fixture in the arguments; since the name of the factory matches the name of the object produced, the test will then silently apply changes to the factory function itself:
def test_me():  # notice: no arg
    fix['this is'] = 'a hell to debug'
Certainly, the outcome is undesirable. It would be nice, for instance, to be able to add a suffix to the factory function's name, but the pytest.fixture decorator apparently does not have a means to override the name of the fixture.
Any other advice would suffice as well.
What is a recommended technique to protect ourselves from this kind of issue?
You can use autouse=True when defining the fixture to invoke it automatically every time its scope starts. This feature was added in pytest 2.0.
For example:
import pytest

@pytest.fixture(scope='function', autouse=True)
def fixture_a():
    return 5

def test_a(fixture_a):
    assert fixture_a == 5

Note that autouse only guarantees the fixture runs for every test in its scope; to access its return value, you still have to declare it as an argument, otherwise the name fixture_a in the test body would refer to the fixture function itself, which is exactly the pitfall the question describes.
Documentation: https://docs.pytest.org/en/latest/reference.html#pytest-fixture
Code example: https://docs.pytest.org/en/latest/fixture.html#autouse-fixtures-xunit-setup-on-steroids
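
As a side note on the renaming idea from the question: the pytest.fixture decorator does accept a name argument, which lets the factory function keep a distinct module-level name, so forgetting the argument raises a NameError instead of silently touching the factory. A sketch:

import pytest

@pytest.fixture(name="fix")
def fix_factory():
    return {}

def test_me(fix):
    # "fix" resolves to the fixture value; the factory is only reachable
    # as fix_factory, so it can't be mutated by accident.
    fix['field'] = 'expected'
    assert fix['field'] == 'expected'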

pytest: Reusable tests for different implementations of the same interface

Imagine I have implemented a utility (maybe a class) called Bar in a module foo, and have written the following tests for it.
test_foo.py:
from foo import Bar as Implementation
from pytest import mark

@mark.parametrize(<args>, <test data set 1>)
def test_one(<args>):
    <do something with Implementation and args>

@mark.parametrize(<args>, <test data set 2>)
def test_two(<args>):
    <do something else with Implementation and args>

<more such tests>
Now imagine that, in the future, I expect different implementations of the same interface to be written. I would like those implementations to be able to reuse the tests that were written for the above test suite: the only things that need to change are
The import of the Implementation
<test data set 1>, <test data set 2> etc.
So I am looking for a way to write the above tests in a reusable way, that would allow authors of new implementations of the interface to be able to use the tests by injecting the implementation and the test data into them, without having to modify the file containing the original specification of the tests.
What would be a good, idiomatic way of doing this in pytest?
====================================================================
Here is a unittest version that (isn't pretty but) works.
define_tests.py:
# Single, reusable definition of tests for the interface. Authors of
# new implementations of the interface merely have to provide the test
# data, as class attributes of a class which inherits
# unittest.TestCase AND this class.
class TheTests:
    def test_foo(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_foo_data:
            self.assertEqual(self.Implementation(*args).foo(in_), out)

    def test_bar(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_bar_data:
            self.assertEqual(self.Implementation(*args).bar(in_), out)
v1.py:
# One implementation of the interface
class Implementation:
    def __init__(self, a, b):
        self.n = a + b

    def foo(self, n):
        return self.n + n

    def bar(self, n):
        return self.n - n
v1_test.py:
# Test for one implementation of the interface
from v1 import Implementation
from define_tests import TheTests
from unittest import TestCase

# Hook into the testing framework by inheriting unittest.TestCase, and
# reuse the tests which *each and every* implementation of the interface
# must pass, by inheriting from define_tests.TheTests
class FooTests(TestCase, TheTests):
    Implementation = Implementation
    test_foo_data = (((1, 2), 3, 6),
                     ((4, 5), 6, 15))
    test_bar_data = (((1, 2), 3, 0),
                     ((4, 5), 6, 3))
Anybody (even a client of the library) writing another implementation of this interface can:
reuse the set of tests defined in define_tests.py
inject their own test data into the tests
without modifying any of the original files
This is a great use case for parametrized test fixtures.
Your code could look something like this:
import pytest
from foo import Bar, Baz

@pytest.fixture(params=[Bar, Baz])
def Implementation(request):
    return request.param

def test_one(Implementation):
    assert Implementation().frobnicate()
This would have test_one run twice: once where Implementation=Bar and once where Implementation=Baz.
Note that since Implementation is just a fixture, you can change its scope, or do more setup (maybe instantiate the class, maybe configure it somehow).
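
For instance, the fixture could instantiate and configure each implementation once per module (a sketch; the configure call is a hypothetical setup hook, not part of the question's interface):

import pytest
from foo import Bar, Baz

@pytest.fixture(params=[Bar, Baz], scope="module")
def implementation(request):
    instance = request.param()
    instance.configure(verbose=True)  # hypothetical configuration hook
    return instance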
If used with the pytest.mark.parametrize decorator, pytest will generate all the permutations. For example, assuming the code above, and this code here:
@pytest.mark.parametrize('thing', [1, 2])
def test_two(Implementation, thing):
    assert Implementation(thing).foo == thing
test_two will run four times, with the following configurations:
Implementation=Bar, thing=1
Implementation=Bar, thing=2
Implementation=Baz, thing=1
Implementation=Baz, thing=2
You can't do it without class inheritance, but you don't have to use unittest.TestCase. To make it more pytest-idiomatic, you can use fixtures.
That allows you, for example, to parametrize fixtures or to use other fixtures.
Here is a simple example:
import pytest

class SomeTest:
    @pytest.fixture
    def implementation(self):
        return "A"

    def test_a(self, implementation):
        assert "A" == implementation

class OtherTest(SomeTest):
    @pytest.fixture(params=["B", "C"])
    def implementation(self, request):
        return request.param

    def test_a(self, implementation):
        """The "implementation" fixture is not accessible out of the class."""
        assert "A" == implementation
and the second test fails:
def test_a(self, implementation):
> assert "A" == implementation
E assert 'A' == 'B'
E - A
E + B
def test_a(self, implementation):
> assert "A" == implementation
E assert 'A' == 'C'
E - A
E + C
def test_a(implementation):
fixture 'implementation' not found
Don't forget that you have to define python_classes = *Test in pytest.ini, since pytest's default only collects classes whose names start with Test.
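
That ini entry would look like this (the *Test glob matches the class names above, which the default python_classes = Test* would not collect):

# pytest.ini
[pytest]
python_classes = *Test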
I did something similar to what @Daniel Barto was saying, adding additional fixtures.
Let's say you have 1 interface and 2 implementations:
class Imp1(InterfaceA):
    pass  # Some implementation.

class Imp2(InterfaceA):
    pass  # Some implementation.
You can indeed encapsulate testing in subclasses:
import pytest

@pytest.fixture
def imp_1():
    yield Imp1()

@pytest.fixture
def imp_2():
    yield Imp2()

class InterfaceToBeTested:
    @pytest.fixture
    def imp(self):
        pass

    def test_x(self, imp):
        assert imp.test_x()

    def test_y(self, imp):
        assert imp.test_y()

class TestImp1(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_1):
        yield imp_1

    def test_1(self, imp):
        assert imp.test_1()

class TestImp2(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_2):
        yield imp_2
Note: Notice how, by adding an additional derived class and overriding the fixture that returns the implementation, you can run all tests on it, and that implementation-specific tests can be written there as well.
Conditional Plugin Based Solution
There is in fact a technique that leans on the pytest_plugins list where you can condition its value on something that transcends pytest, namely environment variables and command line arguments. Consider the following:
import os

if os.environ["pytest_env"] == "env_a":
    pytest_plugins = [
        "projX.plugins.env_a",
    ]
elif os.environ["pytest_env"] == "env_b":
    pytest_plugins = [
        "projX.plugins.env_b",
    ]
I authored a GitHub repository to share some pytest experiments demonstrating the above techniques with commentary along the way and test run results. The relevant section to this particular question is the conditional_plugins experiment.
https://github.com/jxramos/pytest_behavior
This would position you to use the same test module with two different implementations of an identically named fixture. However, you'd need to invoke the tests once per implementation, with the selection mechanism singling out the fixture implementation of interest. Therefore you'd need two pytest sessions to cover the two fixture variations.
In order to reuse the tests you have in place, you'd need to establish a root directory higher than the project you're trying to reuse and define a conftest.py file there that does the plugin selection. That still may not be enough, because of the overriding behavior of the test module and any intermediate conftest.py files, if you leave the directory structure as is. But if you're free to reshuffle files and leave them unchanged, you just need to get the existing conftest.py file out of the path from the test module to the root directory and rename it, so it can be detected as a plugin instead.
Configuration / Command line Selection of Plugins
Pytest actually has a -p command line option where you can list multiple plugins back to back to specify the plugin files. You can learn more about that control by looking at the ini_plugin_selection experiment in the pytest_behavior repo.
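
For example, selecting one plugin module per session might look like this (module paths assumed, mirroring the pytest_plugins snippet above):

pytest -p projX.plugins.env_a
pytest -p projX.plugins.env_b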
Parametrization over Fixture Values
As of this writing this is a work in progress for core pytest, but there is a third-party plugin, pytest-cases, which supports the notion of a fixture itself being used as a parameter to a test case. With that capability you can parametrize over multiple fixtures for the same test case, where each fixture is backed by one of the API implementations. This sounds like the ideal solution to your use case; however, you would still need to decorate the existing test module with new source to permit this parametrization over fixtures, which may not be permissible for you.
Take a look at this rich discussion in an open pytest issue #349 Using fixtures in pytest.mark.parametrize, specifically this comment. He links to a concrete example he wrote up that demonstrates the new fixture parametrization syntax.
Commentary
I get the sense that the test fixture hierarchy one can build above a test module, all the way up to the execution's root directory, is oriented more towards fixture reuse than test module reuse. If you think about it, you can write several fixtures way up in a common subfolder from which a bunch of test modules branch out, potentially landing deep down in a number of child subdirectories. Each of those test modules has access to fixtures defined in that parent conftest.py, but without doing extra work they only get one definition per fixture across all those intermediate conftest.py files, even if the same name is reused across that hierarchy. The fixture chosen is the one closest to the test module, through the pytest fixture overriding mechanism, but the resolution stops at the test module and does not go past it to any folders beneath it where variation might be found. Essentially, there's only one path from the test module to the root dir, which limits the fixture definitions to one. This gives us a one-fixture-to-many-test-modules relationship.

@patch decorator is not compatible with pytest fixture

I have encountered something mysterious when using the patch decorator from the mock package together with a pytest fixture.
I have two modules:
-----test folder
-------func.py
-------test_test.py
in func.py:
def a():
    return 1

def b():
    return a()
in test_test.py:
import pytest
from func import a, b
from mock import patch, Mock

@pytest.fixture(scope="module")
def brands():
    return 1

mock_b = Mock()

@patch('test_test.b', mock_b)
def test_compute_scores(brands):
    a()
It seems that the patch decorator is not compatible with pytest fixtures. Does anyone have any insight on that? Thanks.
When using a pytest fixture with mock.patch, test parameter order is crucial.
If you place a fixture parameter before a mocked one:

from unittest import mock

@mock.patch('my.module.my.class')
def test_my_code(my_fixture, mocked_class):

then the mock object will be in my_fixture, and mocked_class will be searched for as a fixture:
fixture 'mocked_class' not found
But, if you reverse the order, placing the fixture parameter at the end:
But, if you reverse the order, placing the fixture parameter at the end:

from unittest import mock

@mock.patch('my.module.my.class')
def test_my_code(mocked_class, my_fixture):
then all will be fine.
As of Python 3.3, the mock module has been pulled into the standard library as unittest.mock. There is also a backport (for previous versions of Python) available as the standalone library mock.
Combining these 2 libraries within the same test-suite yields the above-mentioned error:
E fixture 'fixture_name' not found
Within your test suite's virtual environment, run pip uninstall mock, and make sure you aren't using the backported library alongside the core unittest library. When you re-run your tests after uninstalling, you will see ImportErrors if this was the case.
Replace all instances of this import with from unittest.mock import <stuff>.
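
The import change is mechanical, for example:

# Before: backported library
from mock import patch, MagicMock

# After: standard library (Python 3.3+)
from unittest.mock import patch, MagicMock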
Hopefully this answer on an old question will help someone.
First off, the question doesn't include the error, so we don't really know what's up. But I'll try to provide something that helped me.
If you want a test decorated with a patched object, then in order for it to work with pytest you could just do this:
@mock.patch('mocked.module')
def test_me(*args):
    mocked_module = args[0]
Or for multiple patches:
@mock.patch('mocked.module1')
@mock.patch('mocked.module')
def test_me(*args):
    mocked_module1, mocked_module2 = args
pytest looks up the names in the test function/method's signature as fixtures. Providing the *args parameter gives us a good workaround for this lookup phase. So, to include a fixture alongside patches, you could do this:
# from question
@pytest.fixture(scope="module")
def brands():
    return 1

@mock.patch('mocked.module1')
def test_me(brands, *args):
    mocked_module1 = args[0]

This worked for me running Python 3.6 and pytest 3.0.6.
If you have multiple patches to be applied, the order in which they are injected is important:

# from question
@pytest.fixture(scope="module")
def brands():
    return 1

# notice the order
@patch('my.module.my.class1')
@patch('my.module.my.class2')
def test_list_instance_elb_tg(mocked_class2, mocked_class1, brands):
    pass
This doesn't address your question directly, but there is the pytest-mock plugin, which provides a mocker fixture (older versions also exposed it under the name mock), allowing you to write this instead:

def test_compute_scores(brands, mocker):
    mock_b = mocker.patch('test_test.b')
    a()
a) For me, the solution was to use a with block inside the test function instead of a @patch decoration before the test function:

from unittest.mock import MagicMock, call, patch

class TestFoo:
    def test_baa(self, my_fixture):
        with patch(
            'module.Class.function_to_patch',
            MagicMock(return_value='mocked_result')
        ) as mocked_function_to_patch:
            result = my_fixture.baa('mocked_input')
        assert result == 'mocked_result'
        mocked_function_to_patch.assert_has_calls([
            call('mocked_input')
        ])
This solution works inside classes (which I use to structure/group my test methods). Using the with block, you don't need to worry about the order of the arguments. I find it more explicit than the injection mechanism, but the code becomes ugly if you patch more than one variable. If you need to patch many dependencies, that might be a signal that your tested function does too many things and should be refactored, e.g. by extracting some of the functionality into separate functions.
b) If you are outside classes and do want a patched object to be injected as an extra argument in a test method, note that @patch does not inject anything when the mock is given as the second argument of the decoration:

@patch('path.to.foo', MagicMock(return_value='foo_value'))
def test_baa(self, my_fixture, mocked_foo):

does not work.
=> Make sure to pass the path as the only argument to the decoration. Then define the return value inside the test function:

@patch('path.to.foo')
def test_baa(self, my_fixture, mocked_foo):
    mocked_foo.return_value = 'foo_value'
(Unfortunately, this does not seem to work inside classes.)
First the fixture(s) are injected, then the variables of the @patch decorations (e.g. 'mocked_foo').
The name of the injected fixture 'my_fixture' needs to be correct: it must match the name of the decorated fixture function (or the explicit name used in the fixture decoration).
The name of the injected patch variable 'mocked_foo' does not follow a distinct naming pattern. You can choose it as you like, independent of the corresponding path of the @patch decoration.
If you inject several patched variables, note that the order is reversed: the mocked instance belonging to the last @patch decoration is injected first:

@patch('path.to.foo')
@patch('path.to.qux')
def test_baa(self, my_fixture, mocked_qux, mocked_foo):
    mocked_foo.return_value = 'foo_value'
I had the same problem, and the solution for me was to use the mock library at version 1.0.1 (before that, I was using unittest.mock at version 2.6.0). Now it works like a charm :)
