How are pytest fixture scopes intended to work? - python

I want to use pytest fixtures to prepare an object I want to use across a set of tests.
I follow the documentation and create a fixture in something_fixture.py with its scope set to session like this:
import pytest

@pytest.fixture(scope="session")
def something():
    return 'something'
Then in test_something.py I try to use the fixture like this:
def test_something(something):
    assert something == 'something'
Which does not work, but if I import the fixture like this:
from tests.something_fixture import something

def test_something(something):
    assert something == 'something'
the test passes...
Is this import necessary? To me this is not clear from the documentation.

This session-scoped fixture should be defined in a conftest.py module, see conftest.py: sharing fixtures across multiple files in the docs.
The conftest.py file serves as a means of providing fixtures for an entire directory. Fixtures defined in a conftest.py can be used by any test in that package without needing to import them (pytest will automatically discover them).
By writing the fixture in something_fixture.py it was defined somewhere that went "unnoticed" because there was no reason for Python to import this module. The default test collection phase considers filenames matching these glob patterns:
- test_*.py
- *_test.py
Since it's a session-scoped fixture, define it instead in a conftest.py file, so it will be discovered automatically and available to all tests in the session.
You can remove the import statement from tests.something_fixture import something. In fact the "tests" subdirectory generally doesn't need to be importable at all.
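For example, a minimal layout (a sketch, assuming the tests live in a tests/ package as in the question) could be:
# tests/conftest.py
import pytest

@pytest.fixture(scope="session")
def something():
    return 'something'

# tests/test_something.py -- no import of the fixture is needed
def test_something(something):
    assert something == 'something'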

Related

Disable autouse fixtures on specific pytest marks

Is it possible to prevent the execution of "function scoped" fixtures with autouse=True on specific marks only?
I have the following fixture set to autouse so that all outgoing requests are automatically mocked out:
@pytest.fixture(autouse=True)
def no_requests(monkeypatch):
    monkeypatch.setattr("requests.sessions.Session.request", MagicMock())
But I have a mark called endtoend that I use to define a series of tests that are allowed to make external requests for more robust end-to-end testing. I would like to inject no_requests into all tests (the vast majority), but not into tests like the following:
@pytest.mark.endtoend
def test_api_returns_ok():
    assert make_request().status_code == 200
Is this possible?
You can also use the request object in your fixture to check the markers used on the test, and do nothing if a specific marker is set:
import pytest

@pytest.fixture(autouse=True)
def autofixt(request):
    if 'noautofixt' in request.keywords:
        return
    print("patching stuff")

def test1():
    pass

@pytest.mark.noautofixt
def test2():
    pass
Output with -vs:
x.py::test1 patching stuff
PASSED
x.py::test2 PASSED
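Applied to the question's no_requests fixture, a sketch (assuming the same requests patching and the endtoend marker) could look like:
import pytest
from unittest.mock import MagicMock

@pytest.fixture(autouse=True)
def no_requests(request, monkeypatch):
    # Leave requests unpatched for tests marked @pytest.mark.endtoend
    if 'endtoend' in request.keywords:
        return
    monkeypatch.setattr("requests.sessions.Session.request", MagicMock())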
In case you have your endtoend tests in specific modules or classes, you could also just override the no_requests fixture locally; for example, assuming you group all your integration tests in a file called test_end_to_end.py:
# test_end_to_end.py
import pytest

@pytest.fixture(autouse=True)
def no_requests():
    return

def test_api_returns_ok():
    # Should make a real request.
    assert make_request().status_code == 200
I wasn't able to find a way to disable fixtures with autouse=True, but I did find a way to revert the changes made in my no_requests fixture. monkeypatch has a method undo that reverts all patches made on the stack, so I was able to call it in my endtoend tests like so:
@pytest.mark.endtoend
def test_api_returns_ok(monkeypatch):
    monkeypatch.undo()
    assert make_request().status_code == 200
It would be difficult, and probably not possible, to cancel or change the autouse.
You can't cancel an autouse fixture, since it's applied automatically. Maybe you could change what the autouse fixture does based on a mark's condition, but this would be hackish and difficult.
possibly with:
import pytest
from _pytest.mark import MarkInfo
I couldn't find a way to do this, but maybe the @pytest.fixture(autouse=True) could get the MarkInfo and, if it came back 'endtoend', the fixture wouldn't set the attribute. You would also have to set a condition in the fixture parameters, i.e. something like @pytest.fixture(True=MarkInfo, autouse=True). But I couldn't find a way.
It's recommended that you organize tests to prevent this.
You could separate the no_requests tests from the endtoend tests by either:
- limiting the scope of your autouse fixture,
- putting no_requests into a class, or
- not making it autouse at all and passing it as a parameter to each test that needs it.
Like so:
import pytest
from unittest.mock import MagicMock

class NoRequests:
    @pytest.fixture(autouse=True)
    def no_requests(self, monkeypatch):
        monkeypatch.setattr("requests.sessions.Session.request", MagicMock())

    def test_no_request1(self):
        # do stuff here
        pass
    # and so on
This is good practice; maybe a different organization could help.
But in your case, it's probably easiest to use monkeypatch.undo().

How to get Django's unittest TestLoader to find and run my doctests?

In Django, my tests are a set of test_foo.py files inside my_django_app/tests/, which each contain a TestCase subclass, and which Django automatically finds and runs.
I have a bunch of utility modules with simple doctests that I would like to be included with my test suite. I tried using doctest.DocTestSuite() to define test suites in my_django_app/tests/test_doctests.py, but django's test runner does not find the new tests in that module.
Is there a way I can create a TestCase class that calls my doctests, or somehow otherwise define a new tests/test_foo.py module that would run these tests?
Django's unittest test discovery automagically looks for a load_tests function in your test_foo module and will run it, so you can use that to add your doctests to the test suite:
import doctest
import module_with_doctests

def load_tests(loader, tests, ignore):
    tests.addTests(doctest.DocTestSuite(module_with_doctests))
    return tests
Also, due to a bug(?) in unittest your load_tests function won't be run unless your test_foo module also defines a TestCase-derived class like so:
from unittest import TestCase

class DoNothingTest(TestCase):
    """Encourage Django unittests to run `load_tests()`."""
    def test_example(self):
        self.assertTrue(True)
I solved this by creating a new module, my_django_app/tests/test_doctests.py, that looks like:
import doctest
import unittest

# These are my modules that contain doctests:
from util import bitwise
from util import text
from util import urlutil

DOCTEST_MODULES = (
    bitwise,
    text,
    urlutil,
)

# unittest.TestLoader will call this when it finds this module:
def load_tests(*args, **kwargs):
    test_all_doctests = unittest.TestSuite()
    for m in DOCTEST_MODULES:
        test_all_doctests.addTest(doctest.DocTestSuite(m))
    return test_all_doctests
Django uses the builtin unittest TestLoader, which, during test discovery, will call load_tests() on your test module. So we define load_tests which creates a test suite out of all of the doctests.
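For illustration, a hypothetical util module with a doctest that this setup would pick up might look like:
# util/bitwise.py (hypothetical contents)
def popcount(n):
    """Count the set bits in n.

    >>> popcount(0)
    0
    >>> popcount(7)
    3
    """
    return bin(n).count('1')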
You can inspect the suite Django builds with:
import django.test.runner
testsuite = django.test.runner.DiscoverRunner().build_suite()
but, as best I can tell, basic unittest discovery produces the same collection:
import unittest
testsuite = unittest.TestLoader().discover('.')
Unrelated
I did notice that unittest and django.test appear to use the internal attribute TestCase._testMethodName differently: in unittest it is the test case's module+class namespace, while in Django it appeared to be a test case method name, with the module and class available as attributes such as self.__module__. It's probably best to avoid poking around in internals anyway, though.

pytest: Reusable tests for different implementations of the same interface

Imagine I have implemented a utility (maybe a class) called Bar in a module foo, and have written the following tests for it.
test_foo.py:
from foo import Bar as Implementation
from pytest import mark
@mark.parametrize(<args>, <test data set 1>)
def test_one(<args>):
    <do something with Implementation and args>

@mark.parametrize(<args>, <test data set 2>)
def test_two(<args>):
    <do something else with Implementation and args>

<more such tests>
Now imagine that, in the future, I expect different implementations of the same interface to be written. I would like those implementations to be able to reuse the tests that were written for the above test suite: the only things that need to change are
- the import of the Implementation
- <test data set 1>, <test data set 2> etc.
So I am looking for a way to write the above tests in a reusable way, that would allow authors of new implementations of the interface to be able to use the tests by injecting the implementation and the test data into them, without having to modify the file containing the original specification of the tests.
What would be a good, idiomatic way of doing this in pytest?
====================================================================
Here is a unittest version that (isn't pretty but) works.
define_tests.py:
# Single, reusable definition of tests for the interface. Authors of
# new implementations of the interface merely have to provide the test
# data, as class attributes of a class which inherits
# unittest.TestCase AND this class.
class TheTests():
    def test_foo(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_foo_data:
            self.assertEqual(self.Implementation(*args).foo(in_),
                             out)
    def test_bar(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_bar_data:
            self.assertEqual(self.Implementation(*args).bar(in_),
                             out)
v1.py:
# One implementation of the interface
class Implementation:
    def __init__(self, a, b):
        self.n = a + b
    def foo(self, n):
        return self.n + n
    def bar(self, n):
        return self.n - n
v1_test.py:
# Test for one implementation of the interface
from v1 import Implementation
from define_tests import TheTests
from unittest import TestCase

# Hook into the testing framework by inheriting unittest.TestCase, and reuse
# the tests which *each and every* implementation of the interface must
# pass, by inheritance from define_tests.TheTests
class FooTests(TestCase, TheTests):
    Implementation = Implementation
    test_foo_data = (((1, 2), 3, 6),
                     ((4, 5), 6, 15))
    test_bar_data = (((1, 2), 3, 0),
                     ((4, 5), 6, 3))
Anybody (even a client of the library) writing another implementation of this interface can:
- reuse the set of tests defined in define_tests.py
- inject their own test data into the tests
all without modifying any of the original files.
This is a great use case for parametrized test fixtures.
Your code could look something like this:
import pytest
from foo import Bar, Baz

@pytest.fixture(params=[Bar, Baz])
def Implementation(request):
    return request.param

def test_one(Implementation):
    assert Implementation().frobnicate()
This would have test_one run twice: once where Implementation=Bar and once where Implementation=Baz.
Note that since Implementation is just a fixture, you can change its scope, or do more setup (maybe instantiate the class, maybe configure it somehow).
If used with the pytest.mark.parametrize decorator, pytest will generate all the permutations. For example, assuming the code above, and this code here:
@pytest.mark.parametrize('thing', [1, 2])
def test_two(Implementation, thing):
    assert Implementation(thing).foo == thing
test_two will run four times, with the following configurations:
- Implementation=Bar, thing=1
- Implementation=Bar, thing=2
- Implementation=Baz, thing=1
- Implementation=Baz, thing=2
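As noted above, since Implementation is just a fixture, it can do more setup; for instance, a sketch that hands the test a configured instance instead of the class (assuming Bar and Baz take no required constructor arguments):
import pytest
from foo import Bar, Baz  # as in the example above

@pytest.fixture(params=[Bar, Baz], scope="module")
def implementation(request):
    # Instantiate once per module instead of returning the bare class
    return request.param()

def test_frobnicate(implementation):
    assert implementation.frobnicate()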
You can't do it without class inheritance, but you don't have to use unittest.TestCase. To make it more pytest-like you can use fixtures, which allow, for example, fixture parametrization or the use of other fixtures. Here is a simple example:
import pytest

class SomeTest:
    @pytest.fixture
    def implementation(self):
        return "A"

    def test_a(self, implementation):
        assert "A" == implementation

class OtherTest(SomeTest):
    @pytest.fixture(params=["B", "C"])
    def implementation(self, request):
        return request.param

    def test_a(self, implementation):
        """ the "implementation" fixture is not accessible out of class """
        assert "A" == implementation
and the second test fails:
    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'B'
E         - A
E         + B

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'C'
E         - A
E         + C

and outside of a class the fixture is not found:
    def test_a(implementation):
E       fixture 'implementation' not found
Don't forget you have to define python_classes = *Test in pytest.ini, otherwise classes like SomeTest and OtherTest won't be collected.
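A minimal pytest.ini for this (a sketch, assuming test classes named like SomeTest and OtherTest) could be:
[pytest]
python_classes = *Test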
I did something similar to what @Daniel Barto was saying, adding additional fixtures.
Let's say you have 1 interface and 2 implementations:
class Imp1(InterfaceA):
    pass  # Some implementation.

class Imp2(InterfaceA):
    pass  # Some implementation.
You can indeed encapsulate testing in subclasses:
import pytest

@pytest.fixture
def imp_1():
    yield Imp1()

@pytest.fixture
def imp_2():
    yield Imp2()

class InterfaceToBeTested:
    @pytest.fixture
    def imp(self):
        pass

    def test_x(self, imp):
        assert imp.test_x()

    def test_y(self, imp):
        assert imp.test_y()

class TestImp1(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_1):
        yield imp_1

    def test_1(self, imp):
        assert imp.test_1()

class TestImp2(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_2):
        yield imp_2
Note how, by adding an additional derived class and overriding the fixture that returns the implementation, you can run all the tests on it, and that any implementation-specific tests could be written there as well.
Conditional Plugin Based Solution
There is in fact a technique that leans on the pytest_plugins list where you can condition its value on something that transcends pytest, namely environment variables and command line arguments. Consider the following:
import os

if os.environ["pytest_env"] == "env_a":
    pytest_plugins = [
        "projX.plugins.env_a",
    ]
elif os.environ["pytest_env"] == "env_b":
    pytest_plugins = [
        "projX.plugins.env_b",
    ]
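For illustration, each plugin module named there (e.g. a hypothetical projX/plugins/env_a.py) would simply define that environment's version of the shared fixture:
# projX/plugins/env_a.py (hypothetical plugin module)
import pytest

@pytest.fixture
def implementation():
    # Return whatever env_a's implementation of the interface is;
    # a plain string stands in for it here.
    return "env_a implementation"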
I authored a GitHub repository to share some pytest experiments demonstrating the above techniques with commentary along the way and test run results. The relevant section to this particular question is the conditional_plugins experiment.
https://github.com/jxramos/pytest_behavior
This would position you to use the same test module with two different implementations of an identically named fixture. However, you'd need to invoke the tests once per implementation, with the selection mechanism singling out the fixture implementation of interest. Therefore you'd need two pytest sessions to test the two fixture variations.
In order to reuse the tests you have in place, you'd need to establish a root directory higher than the project you're trying to reuse and define a conftest.py file there that does the plugin selection. That still may not be enough, because the test module and any intermediate conftest.py files will override those fixtures if you leave the directory structure as is. But if you're free to reshuffle files while leaving them unchanged, you just need to get the existing conftest.py file out of the path from the test module to the root directory and rename it so it can be detected as a plugin instead.
Configuration / Command line Selection of Plugins
Pytest actually has a -p command line option where you can list multiple plugins back to back to specify the plugin files. You can learn more of that control by looking in the ini_plugin_selection experiment in the pytest_behavior repo.
Parametrization over Fixture Values
As of this writing this is a work in progress for core pytest functionality, but there is a third-party plugin, pytest-cases, which supports using a fixture itself as a parameter to a test case. With that capability you can parametrize over multiple fixtures for the same test case, where each fixture is backed by one API implementation. This sounds like the ideal solution for your use case; however, you would still need to decorate the existing test module with new source to permit this parametrization over fixtures, which may not be permissible for you.
Take a look at this rich discussion in an open pytest issue #349 Using fixtures in pytest.mark.parametrize, specifically this comment. He links to a concrete example he wrote up that demonstrates the new fixture parametrization syntax.
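For a rough idea, a sketch of how that might look (assuming pytest-cases' fixture_ref/parametrize API and the Bar/Baz classes from the earlier answer):
import pytest
from pytest_cases import fixture_ref, parametrize
from foo import Bar, Baz  # assumed, as in the earlier answer

@pytest.fixture
def bar_impl():
    return Bar()

@pytest.fixture
def baz_impl():
    return Baz()

# Each fixture becomes one parameter value of the test
@parametrize("implementation", [fixture_ref(bar_impl), fixture_ref(baz_impl)])
def test_frobnicate(implementation):
    assert implementation.frobnicate()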
Commentary
I get the sense that the test fixture hierarchy one can build above a test module all the way up to the execution's root directory is something more oriented towards fixture reuse but not so much test module reuse. If you think about it you can write several fixtures way up in a common subfolder where a bunch of test modules branch out potentially landing deep down in a number of child subdirectories. Each of those test modules would have access to fixtures defined in that parent conftest.py, but without doing extra work they only get one definition per fixture across all those intermediate conftest.py files even if the same name is reused across that hierarchy. The fixture is chosen closest to the test module through the pytest fixture overriding mechanism, but the resolving stops at the test module and does not go past it to any folders beneath the test module where variation might be found. Essentially there's only one path from the test module to the root dir which limits the fixture definitions to one. This gives us a one fixture to many test modules relationship.

Can pytest fixtures be combined?

Can one fixture build on another in pytest? I have a very simple fixture called "cleaner" defined as:
import pytest
from mypackage import db

@pytest.fixture()
def cleaner(request):
    def finalizer():
        db.clear()
    request.addfinalizer(finalizer)
then in my setup.cfg I have:
[pytest]
norecursedirs = .git venv
usefixtures = cleaner
This results in the database being truncated after each test, which is great. But now I want other fixtures I make to also call the finalizer from cleaner. Is there a way to define another fixture that somehow expands on or calls cleaner?
You have to declare that your other fixture depends on cleaner explicitly:
import pytest

@pytest.fixture
def cleaner(request):
    def finalizer():
        print('\n"cleaner" finalized')
    print('\n"cleaner" fixture')
    request.addfinalizer(finalizer)

@pytest.fixture
def other(cleaner):
    print('\n"other" fixture')

def test_foo(other):
    pass
Running this with py.test -s -v produces:
test_foo.py#16::test_foo
"cleaner" fixture
"other" fixture
PASSED
"cleaner" finalized

Why can't unittest.TestCases see my py.test fixtures?

I'm trying to use py.test's fixtures with my unit tests, in conjunction with unittest. I've put several fixtures in a conftest.py file at the top level of the project (as described here), decorated them with #pytest.fixture, and put their names as arguments to the test functions that require them.
The fixtures are registering correctly, as shown by py.test --fixtures test_stuff.py, but when I run py.test, I get NameError: global name 'my_fixture' is not defined. This appears to only occur when I use subclasses of unittest.TestCase—but the py.test docs seem to say that it plays well with unittest.
Why can't the tests see the fixtures when I use unittest.TestCase?
Doesn't work:
conftest.py
import pytest

@pytest.fixture
def my_fixture():
    return 'This is some fixture data'
test_stuff.py
import unittest
import pytest

class TestWithFixtures(unittest.TestCase):
    def test_with_a_fixture(self, my_fixture):
        print(my_fixture)
Works:
conftest.py
import pytest

@pytest.fixture()
def my_fixture():
    return 'This is some fixture data'
test_stuff.py
import pytest

class TestWithFixtures:
    def test_with_a_fixture(self, my_fixture):
        print(my_fixture)
I'm asking this question more out of curiosity; for now I'm just ditching unittest altogether.
While pytest supports receiving fixtures via test function arguments
for non-unittest test methods, unittest.TestCase methods cannot
directly receive fixture function arguments as implementing that is
likely to inflict on the ability to run general unittest.TestCase test
suites.
From the note section at the bottom of:
https://pytest.org/en/latest/unittest.html
It is possible to use fixtures with unittest.TestCase subclasses; see that page for more information.
You can use pytest fixtures in a unittest.TestCase with the pytest autouse option. However, if you try to request the fixture directly from a test_ method, the following error will appear:
Fixtures are not meant to be called directly, ...
### conftest.py
import pytest

@pytest.fixture
def my_fixture():
    return 'This is some fixture data'
One solution is to use a prepare_fixture method to set fixtures as an attribute of the TestWithFixtures class, so that fixtures are available to all unit test methods.
### test_stuff.py
import unittest
import pytest

class TestWithFixtures(unittest.TestCase):
    @pytest.fixture(autouse=True)
    def prepare_fixture(self, my_fixture):
        self.myfixture = my_fixture

    def test_with_a_fixture(self):
        print(self.myfixture)
- Define the fixture so that it exposes an accessible variable (like input in the following example) on the test class, using request.cls.VARIABLE_NAME_YOU_DEFINE = RETURN_VALUE.
- Apply @pytest.mark.usefixtures("YOUR_FIXTURE") to use the fixture from outside the unittest class; inside the unittest class, access the value via self.VARIABLE_NAME_YOU_DEFINE.
e.g.
import unittest
import pytest

@pytest.fixture(scope="class")
def test_input(request):
    request.cls.input = {"key": "value"}

@pytest.mark.usefixtures("test_input")
class MyTestCase(unittest.TestCase):
    def test_something(self):
        self.assertEqual(self.input["key"], "value")
