I'm using pytest to test my app.
pytest supports two approaches (that I'm aware of) for writing tests:
In classes:
test_feature.py -> class TestFeature -> def test_feature_sanity
In functions:
test_feature.py -> def test_feature_sanity
Is the approach of grouping tests in a class needed, or is it just a holdover from the built-in unittest module?
Which approach would you say is better and why?
This answer presents two compelling use-cases for a TestClass in pytest:
Joint parametrization of multiple test methods belonging to a given class.
Reuse of test data and test logic via subclass inheritance
Joint parametrization of multiple test methods belonging to a given class.
The pytest parametrization decorator, @pytest.mark.parametrize, can be used to make inputs available to multiple methods within a class. In the code below, the inputs param1 and param2 are available to each of the methods TestGroup.test_one and TestGroup.test_two.
"""test_class_parametrization.py"""
import pytest
#pytest.mark.parametrize(
("param1", "param2"),
[
("a", "b"),
("c", "d"),
],
)
class TestGroup:
"""A class with common parameters, `param1` and `param2`."""
#pytest.fixture
def fixt(self) -> int:
"""This fixture will only be available within the scope of TestGroup"""
return 123
def test_one(self, param1: str, param2: str, fixt: int) -> None:
print("\ntest_one", param1, param2, fixt)
def test_two(self, param1: str, param2: str) -> None:
print("\ntest_two", param1, param2)
$ pytest -s test_class_parametrization.py
================================================================== test session starts ==================================================================
platform linux -- Python 3.8.6, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
rootdir: /home/jbss
plugins: pylint-0.18.0
collected 4 items
test_class_parametrization.py
test_one a b 123
.
test_one c d 123
.
test_two a b
.
test_two c d
.
=================================================================== 4 passed in 0.01s ===================================================================
Reuse of test data and test logic via subclass inheritance
I'll use a modified version of code taken from another answer to demonstrate the usefulness of inheriting class attributes/methods from TestClass to TestSubclass:
# in file `test_example.py`
class TestClass:
    VAR: int = 3
    DATA: int = 4

    def test_var_positive(self) -> None:
        assert self.VAR >= 0


class TestSubclass(TestClass):
    VAR: int = 8

    def test_var_even(self) -> None:
        assert self.VAR % 2 == 0

    def test_data(self) -> None:
        assert self.DATA == 4
Running pytest on this file causes four tests to be run:
$ pytest -v test_example.py
=========== test session starts ===========
platform linux -- Python 3.8.2, pytest-5.4.2, py-1.8.1
collected 4 items
test_example.py::TestClass::test_var_positive PASSED
test_example.py::TestSubclass::test_var_positive PASSED
test_example.py::TestSubclass::test_var_even PASSED
test_example.py::TestSubclass::test_data PASSED
In the subclass, the inherited test_var_positive method is run using the updated value self.VAR == 8, and the newly defined test_data method is run against the inherited attribute self.DATA == 4. Such method and attribute inheritance gives a flexible way to re-use or modify shared functionality between different groups of test-cases.
There are no strict rules regarding organizing tests into modules vs classes. It is a matter of personal preference. Initially I tried organizing tests into classes; after some time I realized I had no use for another level of organization. Nowadays I just collect test functions into modules (files).
I could see a valid use case where some tests are logically organized in the same file but still need an additional level of organization into classes (for instance, to make use of a class-scoped fixture, as sketched below). But this can also be done by just splitting them into multiple modules.
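For illustration, a minimal sketch of what that class-scoped-fixture case might look like; ExpensiveResource is a made-up stand-in for something costly to build:

import pytest


class ExpensiveResource:
    """Hypothetical stand-in for an object that is expensive to create."""
    def __init__(self) -> None:
        self.uses = 0


class TestSharedResource:
    @pytest.fixture(scope="class")
    def resource(self):
        # Built once per class rather than once per test method.
        return ExpensiveResource()

    def test_is_right_type(self, resource):
        assert isinstance(resource, ExpensiveResource)

    def test_same_instance_is_reused(self, resource):
        resource.uses += 1
        assert resource.uses >= 1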
Typically in unit testing, the object of our tests is a single function. That is, a single function gives rise to multiple tests. In reading through test code, it's useful to have tests for a single unit be grouped together in some way (which also allows us to e.g. run all tests for a specific function), so this leaves us with two options:
Put all tests for each function in a dedicated module
Put all tests for each function in a class
In the first approach we would still be interested in grouping all tests related to a source module (e.g. utils.py) in some way. Now, since we are already using modules to group tests for a function, this means that we would like to use a package to group tests for a source module.
The result is one source function maps to one test module, and one source module maps to one test package.
In the second approach, we would instead have one source function map to one test class (e.g. my_function() -> TestMyFunction), and one source module map to one test module (e.g. utils.py -> test_utils.py).
It depends on the situation, perhaps, but the second approach, i.e. a class of tests for each function you are testing, seems clearer to me. Additionally, if we are testing source classes/methods, then we could simply use an inheritance hierarchy of test classes, and still retain the one source module -> one test module mapping.
Finally, another benefit to either approach over just a flat file containing tests for multiple functions, is that with classes/modules already identifying which function is being tested, you can have better names for the actual tests, e.g. test_does_x and test_handles_y instead of test_my_function_does_x and test_my_function_handles_y.
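As a small illustration of the second approach (utils.py and my_function are hypothetical names, used only to show the layout):

# test_utils.py -- one test class per source function in utils.py
from utils import my_function


class TestMyFunction:
    def test_does_x(self):
        # the class name already says which function is under test
        assert my_function(1) is not None

    def test_handles_y(self):
        assert my_function(None) is None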
Related
I am following this mini-tutorial/blog on pytest-mock. I cannot understand how the mocker works, since there is no import for it - in particular the function declaration def test_mocking_constant_a(mocker):
import mock_examples.functions
from mock_examples.functions import double


def test_mocking_constant_a(mocker):
    mocker.patch.object(mock_examples.functions, 'CONSTANT_A', 2)
    expected = 4
    actual = double()  # now it returns 4, not 2
    assert expected == actual
Somehow the mocker has the attributes/functions of pytest-mock's mocker fixture, in particular mocker.patch.object. But how can that be without an import statement?
The mocker variable is a Pytest fixture. Rather than using imports, fixtures are supplied using dependency injection - that is, Pytest takes care of creating the mocker object for you and supplies it to the test function when it runs the test.
pytest-mock defines the "mocker" fixture in its source code, using the pytest fixture decorator. There, the fixture decorator is used as a regular function, which is a slightly unusual way of doing it. A more typical way of using the fixture decorator would look something like this:
from typing import Any, Generator

import pytest
from pytest_mock import MockerFixture


@pytest.fixture()
def mocker(pytestconfig: Any) -> Generator[MockerFixture, None, None]:
    """
    Return an object that has the same interface to the `mock` module, but
    takes care of automatically undoing all patches after each test method.
    """
    result = MockerFixture(pytestconfig)
    yield result
    result.stopall()
The fixture decorator registers the "mocker" function with Pytest, and when Pytest runs a test with a parameter called "mocker", it inserts the result of the "mocker" function for you.
Pytest can do this because it uses Python's introspection features to view the list of arguments, complete with names, before calling the test function. It compares the names of the arguments with names of fixtures that have been registered, and if the names match, it supplies the corresponding object to that parameter of the test function.
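As a minimal illustration of that name-matching mechanism (this is not pytest-mock itself, just a plain fixture with a made-up name):

import pytest


@pytest.fixture
def greeting():
    # Registered under the name "greeting"; pytest supplies it to any test
    # that declares a parameter with the same name.
    return "hello"


def test_uses_fixture_by_name(greeting):
    # No import of "greeting" anywhere -- pytest matched the argument name
    # against its registered fixtures and injected the value.
    assert greeting == "hello"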
I have a unit test class that is a subclass of Python's unittest.TestCase:
import unittest

class MyTestClass(unittest.TestCase):
    run_parameters = {'param1': 'on'}

    def someTest(self):
        self.assertEqual(something, something_else)
Now I want to create a child class that modifies, say run_parameters, and adds an additional assert statement on top of what was already written:
class NewWayToRunThings_TestClass(MyTestClass):
    run_parameters = {'param1': 'blue'}

    # Want someTest, and all other tests in MyTestClass, to now run
    # with an additional assert statement
Is there some way to accomplish this so that each test runs with an additional assert statement to check that my parameter change worked properly across all my tests?
Yes, there is, but it may not be a good idea, because:
assertions are hidden behind difficult-to-understand Python magic
assertions aren't explicit
Could you update your test methods to reflect the new contract and expected parameters instead?
Also, if a single parameter change breaks a huge number of tests, such that it is easier to dynamically patch the test class than it is to update the tests, the test suite may not be focused enough.
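That said, if you do want a shared check to run before every inherited test, a more explicit alternative than injecting assertions is to put the check in setUp, which unittest runs before each test method. A minimal sketch, where the specific checks are only illustrative:

import unittest


class MyTestClass(unittest.TestCase):
    run_parameters = {'param1': 'on'}

    def setUp(self):
        # Runs before every test method, including inherited ones, so the
        # shared check is explicit rather than hidden behind magic.
        self.assertIn('param1', self.run_parameters)

    def test_something(self):
        self.assertEqual(1 + 1, 2)


class NewWayToRunThings_TestClass(MyTestClass):
    run_parameters = {'param1': 'blue'}

    def setUp(self):
        super().setUp()
        # Additional check applied before every inherited test.
        self.assertEqual(self.run_parameters['param1'], 'blue')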
Imagine I have implemented a utility (maybe a class) called Bar in a module foo, and have written the following tests for it.
test_foo.py:
from foo import Bar as Implementation
from pytest import mark


@mark.parametrize(<args>, <test data set 1>)
def test_one(<args>):
    <do something with Implementation and args>


@mark.parametrize(<args>, <test data set 2>)
def test_two(<args>):
    <do something else with Implementation and args>
<more such tests>
Now imagine that, in the future, I expect different implementations of the same interface to be written. I would like those implementations to be able to reuse the tests written for the above test suite; the only things that need to change are:
The import of the Implementation
<test data set 1>, <test data set 2> etc.
So I am looking for a way to write the above tests in a reusable way, that would allow authors of new implementations of the interface to be able to use the tests by injecting the implementation and the test data into them, without having to modify the file containing the original specification of the tests.
What would be a good, idiomatic way of doing this in pytest?
====================================================================
Here is a unittest version that (isn't pretty but) works.
define_tests.py:
# Single, reusable definition of tests for the interface. Authors of
# new implementations of the interface merely have to provide the test
# data, as class attributes of a class which inherits
# unittest.TestCase AND this class.
class TheTests():
    def test_foo(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_foo_data:
            self.assertEqual(self.Implementation(*args).foo(in_), out)

    def test_bar(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_bar_data:
            self.assertEqual(self.Implementation(*args).bar(in_), out)
v1.py:
# One implementation of the interface
class Implementation:
    def __init__(self, a, b):
        self.n = a + b

    def foo(self, n):
        return self.n + n

    def bar(self, n):
        return self.n - n
v1_test.py:
# Test for one implementation of the interface
from v1 import Implementation
from define_tests import TheTests
from unittest import TestCase


# Hook into the testing framework by inheriting unittest.TestCase, and reuse
# the tests which *each and every* implementation of the interface must
# pass, by inheritance from define_tests.TheTests
class FooTests(TestCase, TheTests):
    Implementation = Implementation
    test_foo_data = (((1, 2), 3, 6),
                     ((4, 5), 6, 15))
    test_bar_data = (((1, 2), 3, 0),
                     ((4, 5), 6, 3))
Anybody (even a client of the library) writing another implementation of this interface
can reuse the set of tests defined in define_tests.py
inject their own test data into the tests
without modifying any of the original files
This is a great use case for parametrized test fixtures.
Your code could look something like this:
import pytest

from foo import Bar, Baz


@pytest.fixture(params=[Bar, Baz])
def Implementation(request):
    return request.param


def test_one(Implementation):
    assert Implementation().frobnicate()
This would have test_one run twice: once where Implementation=Bar and once where Implementation=Baz.
Note that since Implementation is just a fixture, you can change its scope, or do more setup (maybe instantiate the class, maybe configure it somehow).
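For example, a variation that instantiates and configures the implementation inside the fixture; configure() here is a hypothetical method, shown only to mark where setup would go:

import pytest

from foo import Bar, Baz


@pytest.fixture(params=[Bar, Baz], scope="module")
def Implementation(request):
    # Build and configure the object once per module instead of handing back
    # the bare class; tests then receive a ready-to-use instance.
    instance = request.param()
    instance.configure()  # hypothetical setup step
    return instance

Tests written against this variation would use the injected instance directly instead of calling Implementation() themselves.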
If used with the pytest.mark.parametrize decorator, pytest will generate all the permutations. For example, assuming the code above, and this code here:
@pytest.mark.parametrize('thing', [1, 2])
def test_two(Implementation, thing):
    assert Implementation(thing).foo == thing
test_two will run four times, with the following configurations:
Implementation=Bar, thing=1
Implementation=Bar, thing=2
Implementation=Baz, thing=1
Implementation=Baz, thing=2
You can't do it without class inheritance, but you don't have to use unittest.TestCase. To make it more pytest-idiomatic, you can use fixtures.
Fixtures allow you, for example, to parametrize them or to use other fixtures.
Here is a simple example:
import pytest


class SomeTest:
    @pytest.fixture
    def implementation(self):
        return "A"

    def test_a(self, implementation):
        assert "A" == implementation


class OtherTest(SomeTest):
    @pytest.fixture(params=["B", "C"])
    def implementation(self, request):
        return request.param

    def test_a(self, implementation):
        """The "implementation" fixture is not accessible outside of the class."""
        assert "A" == implementation
and the second test fails:
    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'B'
E         - A
E         + B

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'C'
E         - A
E         + C

def test_a(implementation):
    fixture 'implementation' not found
Don't forget that you have to define python_classes = *Test in pytest.ini, since these class names end with Test rather than starting with it.
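For reference, that setting goes in the [pytest] section of pytest.ini; the *Test glob makes pytest collect classes whose names end in Test, such as SomeTest and OtherTest above (you may want to keep the default Test* pattern too, e.g. python_classes = Test* *Test):

[pytest]
python_classes = *Test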
I did something similar to what @Daniel Barto was saying, adding additional fixtures.
Let's say you have one interface and two implementations:
class Imp1(InterfaceA):
    pass  # Some implementation.


class Imp2(InterfaceA):
    pass  # Some implementation.
You can indeed encapsulate testing in subclasses:
@pytest.fixture
def imp_1():
    yield Imp1()


@pytest.fixture
def imp_2():
    yield Imp2()


class InterfaceToBeTested:
    @pytest.fixture
    def imp(self):
        pass

    def test_x(self, imp):
        assert imp.test_x()

    def test_y(self, imp):
        assert imp.test_y()


class TestImp1(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_1):
        yield imp_1

    def test_1(self, imp):
        assert imp.test_1()


class TestImp2(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_2):
        yield imp_2
Note how, by adding an additional derived class and overriding the fixture that returns the implementation, you can run all tests on it; and if there are implementation-specific tests, they can be written there as well.
Conditional Plugin Based Solution
There is in fact a technique that leans on the pytest_plugins list in conftest.py, where you can condition its value on something that lives outside pytest, namely environment variables and command-line arguments. Consider the following:
import os

if os.environ["pytest_env"] == "env_a":
    pytest_plugins = [
        "projX.plugins.env_a",
    ]
elif os.environ["pytest_env"] == "env_b":
    pytest_plugins = [
        "projX.plugins.env_b",
    ]
I authored a GitHub repository to share some pytest experiments demonstrating the above techniques with commentary along the way and test run results. The relevant section to this particular question is the conditional_plugins experiment.
https://github.com/jxramos/pytest_behavior
This would position you to use the same test module with two different implementations of an identically named fixture. However, you'd need to invoke the tests once for each implementation, with the selection mechanism singling out the fixture implementation of interest. Therefore you'd need two pytest sessions to test the two fixture variations.
In order to reuse the tests you have in place, you'd need to establish a root directory higher than the project you're trying to reuse and define a conftest.py file there that does the plugin selection. That still may not be enough, because of the overriding behavior of the test module and any intermediate conftest.py files if you leave the directory structure as is. But if you're free to reshuffle files while leaving their contents unchanged, you just need to get the existing conftest.py file out of the path from the test module to the root directory and rename it so it can be detected as a plugin instead.
Configuration / Command line Selection of Plugins
Pytest actually has a -p command-line option where you can list multiple plugins back to back to specify the plugin files. You can learn more about that control by looking at the ini_plugin_selection experiment in the pytest_behavior repo.
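For example, an invocation along these lines (the plugin module name comes from the hypothetical projX layout above) selects one plugin implementation for the whole session:

pytest -p projX.plugins.env_a tests/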
Parametrization over Fixture Values
As of this writing this is a work in progress for core pytest functionality, but there is a third-party plugin, pytest-cases, which supports using a fixture itself as a parameter to a test case. With that capability you can parametrize over multiple fixtures for the same test case, where each fixture is backed by a different API implementation. This sounds like the ideal solution to your use case; however, you would still need to decorate the existing test module with new source to permit this parametrization over fixtures, which may not be permissible for you.
Take a look at this rich discussion in an open pytest issue #349 Using fixtures in pytest.mark.parametrize, specifically this comment. He links to a concrete example he wrote up that demonstrates the new fixture parametrization syntax.
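A rough sketch of what the pytest-cases style can look like; the implementation classes and fixture names here are made up, and the exact decorator names may differ between plugin versions, so check the plugin's documentation:

import pytest
from pytest_cases import fixture_ref, parametrize


class ImplementationA:          # stand-ins for the two real API implementations
    def frobnicate(self):
        return True


class ImplementationB:
    def frobnicate(self):
        return True


@pytest.fixture
def imp_a():
    return ImplementationA()


@pytest.fixture
def imp_b():
    return ImplementationB()


# One test, parametrized over two fixtures, each backed by a different implementation.
@parametrize("imp", [fixture_ref(imp_a), fixture_ref(imp_b)])
def test_frobnicate(imp):
    assert imp.frobnicate()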
Commentary
I get the sense that the test fixture hierarchy one can build above a test module all the way up to the execution's root directory is something more oriented towards fixture reuse but not so much test module reuse. If you think about it you can write several fixtures way up in a common subfolder where a bunch of test modules branch out potentially landing deep down in a number of child subdirectories. Each of those test modules would have access to fixtures defined in that parent conftest.py, but without doing extra work they only get one definition per fixture across all those intermediate conftest.py files even if the same name is reused across that hierarchy. The fixture is chosen closest to the test module through the pytest fixture overriding mechanism, but the resolving stops at the test module and does not go past it to any folders beneath the test module where variation might be found. Essentially there's only one path from the test module to the root dir which limits the fixture definitions to one. This gives us a one fixture to many test modules relationship.
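To illustrate that overriding behaviour, here is a minimal hypothetical layout where a child directory's conftest.py overrides a fixture from the parent; all names are made up:

# tests/conftest.py
import pytest

@pytest.fixture
def backend():
    return "default-backend"

# tests/fast/conftest.py
import pytest

@pytest.fixture
def backend():
    # The conftest.py closest to the test module wins, so every module under
    # tests/fast/ sees this definition instead of the parent one.
    return "in-memory-backend"

# tests/fast/test_backend.py
def test_uses_nearest_definition(backend):
    assert backend == "in-memory-backend"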
I have the following unittest code running via py.test.
The mere presence of the constructor makes py.test skip the entire class when running
py.test -v -s
collected 0 items / 1 skipped
Can anyone please explain to me this behaviour of py.test?
I am interested in understanding py.test's behaviour; I know the constructor is not needed.
Thanks,
Zdenek
class TestClassName(object):
    def __init__(self):
        pass

    def setup_method(self, method):
        print("setup_method called")

    def teardown_method(self, method):
        print("teardown_method called")

    def test_a(self):
        print("test_a called")
        assert 1 == 1

    def test_b(self):
        print("test_b called")
        assert 1 == 1
The documentation for py.test says that py.test implements the following standard test discovery:
collection starts from the initial command line arguments which may be directories, filenames or test ids.
recurse into directories, unless they match norecursedirs
test_*.py or *_test.py files, imported by their package name.
Test prefixed test classes (without an __init__ method) [<-- notice this one here]
test_ prefixed test functions or methods are test items
So it's not that the constructor isn't needed; py.test just ignores classes that have a constructor. There is also a guide for changing the standard test discovery.
As already mentioned in the answer by Matti Lyra, py.test purposely skips classes which have a constructor. The reason for this is that classes are only used for structural reasons in py.test and do not have any inherent behaviour, whereas when actually writing code it is the opposite: it is much rarer for a class not to have an .__init__() method. So in practice skipping a class with a constructor will likely be what was desired; usually it is just a class which happens to have a conflicting name.
Lastly, py.test needs to instantiate the class in order to execute the tests. If the constructor takes any arguments it can't instantiate it, so again skipping is the right thing to do.
All the above answers clearly explain the underlying cause; I just thought I'd share my experience and a workaround for the warnings.
I got my tests to work without the warnings by aliasing the imported classes:
from app.core.utils import model_from_meta
from app.core.models import Panel, TestType as _TestType
from app.core.serializers import PanelSerializer, TestType as _TestTypeSerializer


def test_model_from_meta():
    assert (Panel is model_from_meta(PanelSerializer))
    assert (_TestType is model_from_meta(_TestTypeSerializer))
After importing the classes using aliases, the warnings no longer get printed.
I hope this helps someone.
In my case, I just happened to have a parameters class named TestParams, which conflicts with pytest looking for classes whose names begin with Test.
Solution: rename your own class.
I'm writing a plugin that will treat my unit tests, functional tests, and integration tests differently.
My tests folder will have the following structure exactly:
/tests
-- /unit
-- /functional
-- /integration
Each unit test will reside in the unit directory, each functional test in the functional directory, and so on.
I am familiar with the Layers plugin but I'd rather have my tests follow a convention.
Which hook exactly should I use to inject the appropriate Layer before tests are run?
Should it be the loadTestsFromModule hook? Can you show me an example?
I'd also like to separate the summary report for each type of test.
Which hook should I use?
I got this working with nose2 by using the nose2 attrib plugin for discovery and some code copied from the nose1 attrib plugin which allowed me to decorate my tests.
Using the nose2 attrib plugin
You will see the nose2 attrib plugin allows for custom attributes to be defined on test functions and classes.
For this to work, you have to specify the attributes of the tests after defining the test function.
import unittest


class MyTestCase(unittest.TestCase):
    def test_function(self):
        self.assertEqual(1+1, 2)
    test_function.custom_attr1 = True
    test_function.custom_attr2 = ['foo', 'bar']
Then you can run a filtered set of tests by specifying -A or --attribute as a nose2 command-line argument, listing the attribute(s) you want to match against your test suite. You can even use the expression command-line argument, -E or --eval-attribute, which allows more complex Python expressions for matching test attributes.
e.g. nose2 -v -A custom_attr1
will run all tests which have a custom_attr1 specified with a truthy value.
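For the expression form, an invocation might look roughly like the following (custom_attr2 is from the example above; check the nose2 attrib plugin docs for the exact expression semantics):

nose2 -v -E "'foo' in custom_attr2"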
Using decorators to specify test attributes
This wasn't quite good enough for me though because I didn't like the idea of defining these attributes on tests after their definition. I wanted to use a decorator instead but nose2 didn't have a built-in decorator for doing this.
I went to the nose1 source code for its attrib plugin and copied the source for the attr function.
def attr(*args, **kwargs):
    """Decorator that adds attributes to classes or functions
    for use with the Attribute (-a) plugin.
    """
    def wrap_ob(ob):
        for name in args:
            setattr(ob, name, True)
        # nose1 used kwargs.iteritems(); .items() is the Python 3 equivalent
        for name, value in kwargs.items():
            setattr(ob, name, value)
        return ob
    return wrap_ob
I put this into a test/attrib_util.py file. Now I can specify attributes using the decorator instead. My original test class code from above can be converted to the (IMO) simpler:
import unittest

from test.attrib_util import attr


class MyTestCase(unittest.TestCase):
    @attr('custom_attr1', custom_attr2=['foo', 'bar'])
    def test_function(self):
        self.assertEqual(1+1, 2)
You will notice that the attributes can be specified as either args or kwargs; all args will get a default value of True.
You can even use this attr decorator on a test class or base class, and the attributes will be applied to all test functions defined within. This allows for very easy separation of unit and functional tests.
import unittest

from test.attrib_util import attr


@attr('functional')
class FunctionalTestCase(unittest.TestCase):
    pass


class MyFunctionalCase(FunctionalTestCase):
    def test_function(self):
        print('this will be considered a "functional" test function')
You don't need to write a plug-in; the built-in attrib plugin is designed for this purpose. It does not depend on your file hierarchy, however. Instead, you mark individual tests as unit, functional, or integration. This would look like:
import unittest

from nose.plugins import attrib


@attrib.attr("functional")
class FunctionalTestCase(unittest.TestCase):
    pass
To run only the functional tests, you would then do:
nosetests -a functional
If I were creating this test layout, I would probably have 3 unittest.TestCase subclasses, already marked with "unit", "functional", and "integration". New tests could easily inherit the proper test type.
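A sketch of that layout might look like the following; the class names are illustrative:

import unittest

from nose.plugins import attrib


@attrib.attr("unit")
class UnitTestCase(unittest.TestCase):
    pass


@attrib.attr("functional")
class FunctionalTestCase(unittest.TestCase):
    pass


@attrib.attr("integration")
class IntegrationTestCase(unittest.TestCase):
    pass


# New tests inherit the appropriate category from their base class:
class MyUnitTests(UnitTestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)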
If you already have the tests sorted into directories (as you mentioned), you could write a plugin that uses the wantDirectory method.
import os.path

from nose.plugins import Plugin


class TestCategory(Plugin):
    """
    Run tests in a defined category (unit, functional, integration). Always
    runs uncategorized tests.
    """
    def wantDirectory(self, dirname):
        dirname = os.path.basename(dirname)
        if (dirname in ('unit', 'functional', 'integration') and
                dirname != self.category):
            return False
        return None
You will want to write options() and configure() methods for this plug-in to deal with enabling and disabling it and gleaning the user's choice of category. When running nosetests you would choose from the three categories:
nosetests --category functional
Since only one test category is run at a time, you would get a separate report for each test category. You could always, of course, run all tests by not enabling this plugin.
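A rough sketch of what those options() and configure() methods might look like for this plugin; the --category flag and NOSE_TEST_CATEGORY variable are illustrative names, not built into nose:

import os

from nose.plugins import Plugin


class TestCategory(Plugin):
    name = "test-category"

    def options(self, parser, env=os.environ):
        super(TestCategory, self).options(parser, env=env)
        parser.add_option("--category",
                          dest="category",
                          default=env.get("NOSE_TEST_CATEGORY"),
                          help="Only run tests from this directory category "
                               "(unit, functional, or integration).")

    def configure(self, options, conf):
        super(TestCategory, self).configure(options, conf)
        self.category = options.category
        # Enable the plugin only when a category was actually requested.
        self.enabled = bool(self.category)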
(adding as a different answer because it is a completely different approach).