Report multiple tests from a single function with pytest - python

I currently have a simple test which instantiates a bunch of similar objects and executes a method to ensure the method does not throw any exceptions:
import pkg_resources

class TestTemplates(object):
    def test_generate_all(self):
        '''Generate all the templates and ensure none of them throw validation errors'''
        for entry_point in pkg_resources.iter_entry_points('cloudformation.template'):
            object = entry_point.load()
            object().build().to_json()
This is reported in the text output of pytest as a single test:
test/test_templates.py::TestTemplates::test_generate_all PASSED
Also in the junit XML:
<testcase classname="test.test_templates.TestTemplates" file="test/test_templates.py" line="31" name="test_generate_all" time="0.0983951091766"></testcase>
Is it possible for each object tested to be reported as a separate test without manually defining a test function for each object?

I'd collect your list of objects in a helper function, then pass that list to a parametrized test. (Note that a plain function is used here rather than a fixture, since fixture functions can't be called directly or handed to parametrize.)

import pkg_resources
import pytest

def entry_point_objects():
    eps = pkg_resources.iter_entry_points('cloudformation.template')
    return [ep.load() for ep in eps]

@pytest.mark.parametrize('obj', entry_point_objects())
def test_generate_all(obj):
    obj().build().to_json()
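If you'd also like readable test IDs in the report instead of the objects' default labels, parametrize accepts an ids argument. A small sketch, assuming each loaded entry point is a class so its __name__ makes a sensible label:

@pytest.mark.parametrize('obj', entry_point_objects(),
                         ids=lambda cls: cls.__name__)  # label each test with the class name
def test_generate_all(obj):
    obj().build().to_json()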


How does a pytest mocker work given there is no import statement for it?

I am following this mini-tutorial/blog on pytest-mock. I cannot understand how the mocker works, since there is no import for it; in particular, the function declaration def test_mocking_constant_a(mocker):
import mock_examples.functions
from mock_examples.functions import double

def test_mocking_constant_a(mocker):
    mocker.patch.object(mock_examples.functions, 'CONSTANT_A', 2)
    expected = 4
    actual = double()  # now it returns 4, not 2
    assert expected == actual
Somehow the mocker has the attributes/functions of pytest-mock's mocker fixture, in particular mocker.patch.object. But how can that be without an import statement?
The mocker variable is a Pytest fixture. Rather than using imports, fixtures are supplied using dependency injection - that is, Pytest takes care of creating the mocker object for you and supplies it to the test function when it runs the test.
Pytest-mock defines the "mocker" fixture here, using the Pytest fixture decorator. Here, the fixture decorator is used as a regular function, which is a slightly unusual way of doing it. A more typical way of using the fixture decorator would look something like this:
from typing import Any, Generator

import pytest
from pytest_mock import MockerFixture

@pytest.fixture()
def mocker(pytestconfig: Any) -> Generator[MockerFixture, None, None]:
    """
    Return an object that has the same interface to the `mock` module, but
    takes care of automatically undoing all patches after each test method.
    """
    result = MockerFixture(pytestconfig)
    yield result
    result.stopall()
The fixture decorator registers the "mocker" function with Pytest, and when Pytest runs a test with a parameter called "mocker", it inserts the result of the "mocker" function for you.
Pytest can do this because it uses Python's introspection features to view the list of arguments, complete with names, before calling the test function. It compares the names of the arguments with names of fixtures that have been registered, and if the names match, it supplies the corresponding object to that parameter of the test function.
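A toy sketch of that matching idea (purely illustrative, not pytest's actual implementation; the FIXTURES registry and run_test helper are made up):

import inspect

# Made-up registry standing in for pytest's internal fixture table.
FIXTURES = {'mocker': lambda: 'a MockerFixture instance'}

def run_test(test_func):
    # Read the test's parameter names via introspection...
    names = inspect.signature(test_func).parameters
    # ...and build a value for every name that matches a registered fixture.
    kwargs = {name: FIXTURES[name]() for name in names if name in FIXTURES}
    test_func(**kwargs)

def test_example(mocker):
    assert mocker == 'a MockerFixture instance'

run_test(test_example)  # passes: 'mocker' was injected by name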

pytest access parameters from function scope fixture

Let's assume I have the following code:
@pytest.mark.parametrize("argument", [1])
def test_func(self, function_context, argument):
    ...
And I have the following function scope fixture:
@pytest.fixture(scope='function')
def function_context(session_context):
    # .... do something ....
Is it possible to access the current function argument from within the function_context fixture?
In my case - I want to get the value 1 that is being passed in parametrize from within function_context.
Fixtures in pytest are instantiated before the actual tests are run, so it shouldn't be possible to access the test function's arguments at the fixture definition stage. However, I can think of two ways to bypass this:
1. Monkeypatching
You can monkeypatch the fixture, i.e. temporarily change some of its attributes, based on the parameter of the function that uses this fixture. For example:
@pytest.fixture(scope='function')
def function_context(session_context):
    # .... do something ....

@pytest.mark.parametrize("argument", [1])
def test_func(self, function_context, argument, monkeypatch):
    monkeypatch.setattr(function_context, "number", argument)  # assuming you want to change the attribute "number" of the function context
    # .... do something ....
Note that, just as your fixture is only valid for the scope of a single function anyhow, monkeypatching is likewise only valid for a single run of the test.
2. Parametrizing the fixture instead of the test function
Alternatively, you can also choose to parametrize the fixture itself instead of the test_func. For example:
@pytest.fixture(scope='function', params=[0, 1])
def function_context(session_context, request):
    param = request.param  # now you can use param in the fixture
    # .... do something ...

def test_func(self, function_context):
    # .... do something ...
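For illustration, a self-contained version of option 2 might look like this (the session_context dependency is dropped and the 'number' key is made up):

import pytest

@pytest.fixture(scope='function', params=[0, 1])
def function_context(request):
    # Build the context around the current parameter value.
    return {'number': request.param}

def test_func(function_context):
    # Runs twice, once per fixture param.
    assert function_context['number'] in (0, 1)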

pytest: Reusable tests for different implementations of the same interface

Imagine I have implemented a utility (maybe a class) called Bar in a module foo, and have written the following tests for it.
test_foo.py:
from foo import Bar as Implementation
from pytest import mark

@mark.parametrize(<args>, <test data set 1>)
def test_one(<args>):
    <do something with Implementation and args>

@mark.parametrize(<args>, <test data set 2>)
def test_two(<args>):
    <do something else with Implementation and args>

<more such tests>
Now imagine that, in the future, I expect different implementations of the same interface to be written. I would like those implementations to be able to reuse the tests that were written for the above test suite: the only things that need to change are
The import of the Implementation
<test data set 1>, <test data set 2> etc.
So I am looking for a way to write the above tests in a reusable way, that would allow authors of new implementations of the interface to be able to use the tests by injecting the implementation and the test data into them, without having to modify the file containing the original specification of the tests.
What would be a good, idiomatic way of doing this in pytest?
====================================================================
Here is a unittest version that (isn't pretty but) works.
define_tests.py:
# Single, reusable definition of tests for the interface. Authors of
# new implementations of the interface merely have to provide the test
# data, as class attributes of a class which inherits
# unittest.TestCase AND this class.
class TheTests():
    def test_foo(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_foo_data:
            self.assertEqual(self.Implementation(*args).foo(in_), out)

    def test_bar(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_bar_data:
            self.assertEqual(self.Implementation(*args).bar(in_), out)
v1.py:
# One implementation of the interface
class Implementation:
    def __init__(self, a, b):
        self.n = a + b

    def foo(self, n):
        return self.n + n

    def bar(self, n):
        return self.n - n
v1_test.py:
# Test for one implementation of the interface
from v1 import Implementation
from define_tests import TheTests
from unittest import TestCase

# Hook into testing framework by inheriting unittest.TestCase and reuse
# the tests which *each and every* implementation of the interface must
# pass, by inheritance from define_tests.TheTests
class FooTests(TestCase, TheTests):
    Implementation = Implementation
    test_foo_data = (((1, 2), 3, 6),
                     ((4, 5), 6, 15))
    test_bar_data = (((1, 2), 3, 0),
                     ((4, 5), 6, 3))
Anybody (even a client of the library) writing another implementation of this interface can:
- reuse the set of tests defined in define_tests.py
- inject their own test data into the tests
- do so without modifying any of the original files
This is a great use case for parametrized test fixtures.
Your code could look something like this:
import pytest
from foo import Bar, Baz

@pytest.fixture(params=[Bar, Baz])
def Implementation(request):
    return request.param

def test_one(Implementation):
    assert Implementation().frobnicate()
This would have test_one run twice: once where Implementation=Bar and once where Implementation=Baz.
Note that since Implementation is just a fixture, you can change its scope, or do more setup (maybe instantiate the class, maybe configure it somehow).
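For example, the fixture could hand back a configured instance instead of the bare class (a sketch; the configure call is a hypothetical setup step, not part of the original interface):

@pytest.fixture(params=[Bar, Baz])
def Implementation(request):
    instance = request.param()        # instantiate the class under test
    instance.configure(verbose=True)  # hypothetical per-test setup
    return instance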
If used with the pytest.mark.parametrize decorator, pytest will generate all the permutations. For example, assuming the code above, and this code here:
@pytest.mark.parametrize('thing', [1, 2])
def test_two(Implementation, thing):
    assert Implementation(thing).foo == thing
test_two will run four times, with the following configurations:
Implementation=Bar, thing=1
Implementation=Bar, thing=2
Implementation=Baz, thing=1
Implementation=Baz, thing=2
You can't do it without class inheritance, but you don't have to use unittest.TestCase. To make it more pytest-idiomatic, you can use fixtures. That gives you, for example, fixture parametrization, and lets you use other fixtures. Here is a simple example:
import pytest

class SomeTest:
    @pytest.fixture
    def implementation(self):
        return "A"

    def test_a(self, implementation):
        assert "A" == implementation

class OtherTest(SomeTest):
    @pytest.fixture(params=["B", "C"])
    def implementation(self, request):
        return request.param

def test_a(implementation):
    """ the "implementation" fixture is not accessible out of class """
    assert "A" == implementation
and the second test fails:

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'B'
E         - A
E         + B

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'C'
E         - A
E         + C

    def test_a(implementation):
E       fixture 'implementation' not found
Don't forget you have to define python_classes = *Test in pytest.ini (the option is python_classes; pytest's default Test* prefix pattern would not match these class names).
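That is, something like:

[pytest]
python_classes = *Test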
I did something similar to what @Daniel Barto was saying, adding additional fixtures.
Let's say you have 1 interface and 2 implementations:
class Imp1(InterfaceA):
    pass  # Some implementation.

class Imp2(InterfaceA):
    pass  # Some implementation.
You can indeed encapsulate testing in subclasses:
import pytest

@pytest.fixture
def imp_1():
    yield Imp1()

@pytest.fixture
def imp_2():
    yield Imp2()

class InterfaceToBeTested:
    @pytest.fixture
    def imp(self):
        pass

    def test_x(self, imp):
        assert imp.test_x()

    def test_y(self, imp):
        assert imp.test_y()

class TestImp1(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_1):
        yield imp_1

    def test_1(self, imp):
        assert imp.test_1()

class TestImp2(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_2):
        yield imp_2
Note: by adding an additional derived class and overriding the fixture that returns the implementation, you can run all the inherited tests against it; and in case there are implementation-specific tests, they can be written there as well.
Conditional Plugin Based Solution
There is in fact a technique that leans on the pytest_plugins list where you can condition its value on something that transcends pytest, namely environment variables and command line arguments. Consider the following:
import os

if os.environ["pytest_env"] == "env_a":
    pytest_plugins = [
        "projX.plugins.env_a",
    ]
elif os.environ["pytest_env"] == "env_b":
    pytest_plugins = [
        "projX.plugins.env_b",
    ]
I authored a GitHub repository to share some pytest experiments demonstrating the above techniques with commentary along the way and test run results. The relevant section to this particular question is the conditional_plugins experiment.
https://github.com/jxramos/pytest_behavior
This would position you to use the same test module with two different implementations of an identically named fixture. However, you'd need to invoke the tests once per implementation, with the selection mechanism singling out the fixture implementation of interest. Therefore you'd need two pytest sessions to test the two fixture variations.
In order to reuse the tests you have in place, you'd need to establish a root directory higher than the project you're trying to reuse, and define a conftest.py file there that does the plugin selection. That still may not be enough, because of the overriding behavior of the test module and any intermediate conftest.py files if you leave the directory structure as is. But if you're free to reshuffle files and leave them unchanged, you just need to get the existing conftest.py file out of the path from the test module to the root directory and rename it so it can be detected as a plugin instead.
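A rough sketch of such a layout (all names made up):

root/
    conftest.py          # does the pytest_plugins selection shown above
    projX/
        plugins/
            env_a.py     # one implementation of the shared fixture
            env_b.py     # the other implementation
        tests/
            test_api.py  # the reused test module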
Configuration / Command line Selection of Plugins
Pytest actually has a -p command line option where you can list multiple plugins back to back to specify the plugin files. You can learn more of that control by looking in the ini_plugin_selection experiment in the pytest_behavior repo.
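For instance, the hypothetical plugin module from the snippet above could be selected directly on the command line:

pytest -p projX.plugins.env_a test/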
Parametrization over Fixture Values
As of this writing this is a work in progress for core pytest functionality, but there is a third-party plugin, pytest-cases, which supports the notion of a fixture itself being used as a parameter to a test case. With that capability you can parametrize over multiple fixtures for the same test case, where each fixture is backed by one API implementation. This sounds like the ideal solution to your use case; however, you would still need to decorate the existing test module with new source to permit this parametrization over fixtures, which may not be permissible for you.
Take a look at this rich discussion in an open pytest issue #349 Using fixtures in pytest.mark.parametrize, specifically this comment. He links to a concrete example he wrote up that demonstrates the new fixture parametrization syntax.
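A rough sketch of what that might look like with pytest-cases, reusing the Imp1/Imp2 classes from earlier (this assumes a recent version of the plugin; check its docs for the exact API):

from pytest_cases import fixture, fixture_ref, parametrize

@fixture
def imp_1():
    return Imp1()

@fixture
def imp_2():
    return Imp2()

# fixture_ref lets each fixture stand in as a parametrize value.
@parametrize("imp", [fixture_ref(imp_1), fixture_ref(imp_2)])
def test_interface(imp):
    assert imp.test_x()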
Commentary
I get the sense that the test fixture hierarchy one can build above a test module, all the way up to the execution's root directory, is oriented more towards fixture reuse than test module reuse. If you think about it, you can write several fixtures way up in a common subfolder from which a bunch of test modules branch out, potentially landing deep down in a number of child subdirectories. Each of those test modules has access to the fixtures defined in that parent conftest.py, but without extra work they only get one definition per fixture name across all those intermediate conftest.py files, even if the same name is reused across that hierarchy.

The fixture chosen is the one closest to the test module, through pytest's fixture overriding mechanism, but resolution stops at the test module and does not continue past it into any folders beneath it where variation might be found. Essentially there is only one path from the test module to the root dir, which limits each fixture to one definition. This gives us a one-fixture-to-many-test-modules relationship.

pytest fixture is always returning a function

I want to be able to return a value from a fixture to multiple tests/test classes, but the value that gets passed is a function.
Here's my code:
import pytest
@pytest.fixture()
def user_setup():
    user = {
        'name': 'chad',
        'id': 1
    }
    return user

@pytest.mark.usefixtures('user_setup')
class TestThings:
    def test_user(self):
        assert user_setup['name'] == 'chad'
The output is:
=================================== FAILURES ===================================
_____________________________ TestThings.test_user _____________________________
self = <tests.test_again.TestThings instance at 0x10aed6998>

    def test_user(self):
>       assert user_setup['name'] == 'chad'
E       TypeError: 'function' object has no attribute '__getitem__'

tests/test_again.py:14: TypeError
=========================== 1 failed in 0.02 seconds ===========================
But if I rewrite my test so that it doesn't use the usefixtures decorator, it works as expected:
def test_user(user_setup):
    assert user_setup['name'] == 'chad'
Any ideas why it's not working when I try to use the decorator method?
When you use the @pytest.mark.usefixtures marker you still need to provide a similarly named input argument if you want that fixture's value to be injected into your test function.
As described in the py.test docs for fixtures:
The name of the fixture function can later be referenced to cause its invocation ahead of running tests... Test functions can directly use fixture names as input arguments in which case the fixture instance returned from the fixture function will be injected.
So just using the @pytest.mark.usefixtures decorator will only invoke the function. Providing an input argument will give you the result of that function.
You only really need to use @pytest.mark.usefixtures when you want to invoke a fixture but don't want to have it as an input argument to your test, as described in the py.test docs.
The reason you are getting an exception saying user_setup is a function is that, inside your test_user function, the name user_setup actually refers to the function you defined earlier in the file. To get the code to work as you expect, you need to add an argument to the test_user function:
@pytest.mark.usefixtures('user_setup')
class TestThings:
    def test_user(self, user_setup):
        assert user_setup['name'] == 'chad'
Now from the perspective of the test_user function the name user_setup will refer to the function argument which will be the returned value of the fixture as injected by py.test.
But really, you just don't need to use the @pytest.mark.usefixtures decorator at all here.
In both cases, in the global scope, user_setup refers to the function. The difference is that in your non-fixture version you are creating a parameter with the same name, which is a classic recipe for confusion. In that non-fixture version, within the scope of test_user, the user_setup identifier refers to whatever is passed in, NOT the function in the global scope.
In the usefixtures version, I think you probably mean to call user_setup and subscript the result, like:
assert user_setup()['name'] == 'chad'
(Bear in mind that recent pytest versions raise an error when a fixture function is called directly, so taking user_setup as a test argument is the more robust fix.)

How to make pytest display a custom string representation for fixture parameters?

When using builtin types as fixture parameters, pytest prints out the value of the parameters in the test report. For example:
from pytest import fixture

@fixture(params=['hello', 'world'])
def data(request):
    return request.param

def test_something(data):
    pass
Running this with py.test --verbose will print something like:
test_example.py:7: test_something[hello] PASSED
test_example.py:7: test_something[world] PASSED
Note that the value of the parameter is printed in square brackets after the test name.
Now, when using an object of a user-defined class as parameter, like so:
class Param(object):
    def __init__(self, text):
        self.text = text

@fixture(params=[Param('hello'), Param('world')])
def data(request):
    return request.param

def test_something(data):
    pass
pytest will simply enumerate the number of values (p0, p1, etc.):
test_example.py:7: test_something[p0] PASSED
test_example.py:7: test_something[p1] PASSED
This behavior does not change even when the user-defined class provides custom __str__ and __repr__ implementations. Is there any way to make pytest display something more useful than just p0 here?
I am using pytest 2.5.2 on Python 2.7.6 on Windows 7.
The fixture decorator takes an ids parameter which can be used to override the automatic parameter name:
@fixture(params=[Param('hello'), Param('world')], ids=['hello', 'world'])
def data(request):
    return request.param
As shown it takes a list of names to use for the corresponding item in the params list.
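In pytest versions newer than the 2.5.2 mentioned above, ids can also be a callable that computes the label from each parameter value, which avoids maintaining a parallel list:

@fixture(params=[Param('hello'), Param('world')], ids=lambda p: p.text)
def data(request):
    return request.param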
