How to use xunit-style setup_function - python

The docs say:
If you would rather define test functions directly at module level you can also use the following functions to implement fixtures:
def setup_function(function):
    """setup any state tied to the execution of the given function.
    Invoked for every test function in the module.
    """

def teardown_function(function):
    """teardown any state that was previously setup with a setup_function
    call.
    """
but actually it's unclear how you're supposed to use them.
I tried putting them in my test file exactly as shown above, but they don't get called. Looking closer at them, with that function arg they look like decorators. So I tried to find a way to do:
@pytest.setup_function
def my_setup():
    # not called
I couldn't find anywhere to import one from as a decorator though, so that can't be right.
I found this thread on grokbase where someone explains you can do:
@pytest.fixture(autouse=True)
def setup_function(request):
    # this one gets called
It works, but it doesn't seem related to the docs any more... there's no reason to call it setup_function here.

You just implement the body of setup_function(); it is called before each function whose name starts with test_, and the test function is handed in as a parameter:
def setup_function(fun):
    print("in setup function: " + fun.__name__)

def test_test():
    assert False
This will give as output, when run with py.test:
============================= test session starts ==============================
platform linux2 -- Python 2.7.6 -- pytest-2.3.5
collected 1 items
test/test_test.py F
=================================== FAILURES ===================================
__________________________________ test_test ___________________________________
    def test_test():
>       assert False
E       assert False
test/test_test.py:7: AssertionError
------------------------------- Captured stdout --------------------------------
in setup function: test_test
=========================== 1 failed in 0.01 seconds ===========================
The line before the last one shows the output from the actual call to setup_function().
A slightly more useful example, actually doing something that influences the test function:
def setup_function(function):
    function.answer = 17

def teardown_function(function):
    del function.answer

def test_modlevel():
    assert modlevel[0] == 42
    assert test_modlevel.answer == 17
This was taken from py.test's own tests, always a good (and hopefully complete) set of examples of all the features of py.test.
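In the original file, modlevel is a module-level list filled in by a module-level setup hook. A minimal self-contained sketch of the whole thing (the setup_module/teardown_module part is my own addition to make the snippet runnable on its own) could look like this:
modlevel = []

def setup_module(module):
    # xunit-style module-level setup: runs once before any test in this module
    modlevel.append(42)

def teardown_module(module):
    # runs once after all tests in this module
    modlevel.clear()

def setup_function(function):
    # runs before every test function; attach some state to it
    function.answer = 17

def teardown_function(function):
    # runs after every test function; clean the state up again
    del function.answer

def test_modlevel():
    assert modlevel[0] == 42
    assert test_modlevel.answer == 17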

Related

pytest - how to assert if a method of a class is called inside a method

I am trying to figure out how to know if a method of a class is being called inside another method.
Following is the code for the unit test:
# test_unittes.py file
def test_purge_s3_files(mocker):
    args = Args()
    mock_s3fs = mocker.patch('s3fs.S3FileSystem')
    segment_obj = segments.Segmentation()
    segment_obj.purge_s3_files('sample')
    mock_s3fs.bulk_delete.assert_called()
Inside the purge_s3_files method, bulk_delete is called, but the assertion says that the method was expected to be called and was not!
mocker = <pytest_mock.plugin.MockerFixture object at 0x7fac28d57208>

    def test_purge_s3_files(mocker):
        args = Args()
        mock_s3fs = mocker.patch('s3fs.S3FileSystem')
        segment_obj = segments.Segmentation(environment='qa',
                                            verbose=True,
                                            args=args)
        segment_obj.purge_s3_files('sample')
>       mock_s3fs.bulk_delete.assert_called()
E       AssertionError: Expected 'bulk_delete' to have been called.
I don't know how to test this and how to assert that the method is called!
Below you can find the method being tested:
# segments.py file
import s3fs

def purge_s3_files(self, prefix=None):
    bucket = 'sample_bucket'
    files = []
    fs = s3fs.S3FileSystem()
    if fs.exists(f'{bucket}/{prefix}'):
        files.extend(fs.ls(f'{bucket}/{prefix}'))
    else:
        print(f'Directory {bucket}/{prefix} does not exist in s3.')
    print(f'Purging S3 files from {bucket}/{prefix}.')
    print(*files, sep='\n')
    fs.bulk_delete(files)
The problem you are facing is that the mock you are setting up mocks out the class, and you are not using the instance when checking your mocks. In short, this should fix your problem (though there might be another issue, explained further below):
m = mocker.patch('s3fs.S3FileSystem')
mock_s3fs = m.return_value  # (or m())
There might be a second problem: you may not be referencing the right path to the thing you want to mock.
Depending on what is considered your project root (considering your comment here), your patch target would need to be referenced accordingly:
mocker.patch('app.segments.s3fs.S3FileSystem')
The rule of thumb is that you always want to patch where the object is used (looked up) by the code under test, not where it is defined.
If you are able to use your debugger (or print to your console) you will (hopefully :)) see that the expected call count lives inside the return_value of your mock object; running your code in my debugger, the call_count attribute on the return_value is set to 1. Pointing back to what I mentioned at the beginning of the answer, by making that change you will now be able to use the intended mock_s3fs.bulk_delete.assert_called().
Putting it together, your test with this modification runs as expected (note: you should also set up the expected behaviour of, and assert on, the other fs methods you are calling in there):
def test_purge_s3_files(mocker):
    args = Args()
    m = mocker.patch("app.segments.s3fs.S3FileSystem")
    mock_s3fs = m.return_value  # (or m())
    segment_obj = segments.Segmentation(environment='qa',
                                        verbose=True,
                                        args=args)
    segment_obj.purge_s3_files('sample')
    mock_s3fs.bulk_delete.assert_called()
Python mock testing depends on where the mock is being used, so you have to mock the function where it is imported.
E.g. app/r_executor.py:
def r_execute(file):
    # do something
    ...
But the actual function call happens in another namespace, analyse/news.py:
from app.r_executor import r_execute

def analyse(file):
    r_execute(file)
To mock this I should use
mocker.patch('analyse.news.r_execute')
# not mocker.patch('app.r_executor.r_execute')
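For illustration, a rough sketch of what such a test could look like (my own example, assuming pytest-mock is available and analyse() receives the file as a parameter as above):
def test_analyse_calls_r_execute(mocker):
    # patch the name where it is looked up: analyse.news, not app.r_executor
    mock_r_execute = mocker.patch('analyse.news.r_execute')
    from analyse.news import analyse
    analyse('report.txt')
    mock_r_execute.assert_called_once_with('report.txt')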

How can I rewrite this fixture call so it won't be called directly?

I defined the following fixture in a test file:
import os
from dotenv import load_dotenv, find_dotenv
from packaging import version  # for comparing version numbers

load_dotenv(find_dotenv())
VERSION = os.environ.get("VERSION")
API_URL = os.environ.get("API_URL")

@pytest.fixture()
def skip_before_version():
    """
    Creates a fixture that takes parameters
    skips a test if it depends on certain features implemented in a certain version
    :parameter target_version:
    :parameter type: string
    """
    def _skip_before(target_version):
        less_than = version.parse(current_version) < version.parse(VERSION)
        return pytest.mark.skipif(less_than)
    return _skip_before

skip_before = skip_before_version()("0.0.1")
I want to use skip_before as a fixture in certain tests. I call it like this:
# @skip_before_version("0.0.1")  # tried this before and got the same error, so tried reworking it...
@when(parsers.cfparse("{categories} are added as categories"))
def add_categories(skip_before, create_tree, categories):  # now putting the fixture alongside parameters
    pass
When I run this, I get the following error:
Fixture "skip_before_version" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
How is this still being called directly? How can I fix this?
If I understand your goal correctly, you want to be able to skip tests based on a version restriction specifier. There are many ways to do that; I can suggest an autouse fixture that will skip the test based on a custom marker condition. Example:
import os
import pytest
from packaging.specifiers import SpecifierSet

VERSION = "1.2.3"  # read from environment etc.

@pytest.fixture(autouse=True)
def skip_based_on_version_compat(request):
    # get the version_compat marker
    version_compat = request.node.get_closest_marker("version_compat")
    if version_compat is None:  # test is not marked
        return
    if not version_compat.args:  # no specifier passed to marker
        return
    spec_arg = version_compat.args[0]
    spec = SpecifierSet(spec_arg)
    if VERSION not in spec:
        pytest.skip(f"Current version {VERSION} doesn't match test specifiers {spec_arg!r}.")
The fixture skip_based_on_version_compat will be invoked on each test, but will only do something if the test is marked with @pytest.mark.version_compat. Example tests:
@pytest.mark.version_compat(">=1.0.0")
def test_current_gen():
    assert True

@pytest.mark.version_compat(">=2.0.0")
def test_next_gen():
    raise NotImplementedError()
With VERSION = "1.2.3", the first test will be executed, the second one will be skipped. Notice the invocation of pytest.skip to immediately skip the test. Returning pytest.mark.skip in the fixture will bring you nothing since the markers are already evaluated long before that.
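Side note (my own addition): to stop pytest from warning about the unknown version_compat marker, you can register it, for example in conftest.py:
def pytest_configure(config):
    # register the custom marker so pytest doesn't emit "unknown marker" warnings
    config.addinivalue_line(
        "markers",
        "version_compat(spec): run the test only if the current version matches the specifier",
    )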
Also, I noticed you are writing gherkin tests (using pytest-bdd, presumably). With the above approach, skipping whole scenarios should also be possible:
@pytest.mark.version_compat(">=1.0.0")
@scenario("my.feature", "my scenario")
def test_scenario():
    pass
Alternatively, you can mark the scenarios in feature files:
Feature: Foo
    Lorem ipsum dolor sit amet.

    @version_compat(">=1.0.0")
    Scenario: doing future stuff
        Given foo is implemented
        When I see foo
        Then I do bar
and use pytest-bdd's own hooks:
import re

def pytest_bdd_apply_tag(tag, function):
    matcher = re.match(r'^version_compat\("(?P<spec_arg>.*)"\)$', tag)
    if matcher is None:
        return None  # not our tag, fall back to the default behaviour
    spec_arg = matcher.groupdict()["spec_arg"]
    spec = SpecifierSet(spec_arg)
    if VERSION not in spec:
        marker = pytest.mark.skip(
            reason=f"Current version {VERSION} doesn't match restriction {spec_arg!r}."
        )
        marker(function)
    return True
Unfortunately, neither custom fixtures nor markers will work for skipping single steps (and you will still be skipping the whole scenario, since it is an atomic test unit in gherkin). I didn't find a reliable way to make pytest-bdd steps cooperate with pytest's own machinery; they look like they are simply ignored. Nevertheless, you can easily write a custom decorator serving the same purpose:
import functools

def version_compat(spec_arg):
    def deco(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            spec = SpecifierSet(spec_arg)
            if VERSION not in spec:
                pytest.skip(f"Current version {VERSION} doesn't match test specifiers {spec_arg!r}.")
            return func(*args, **kwargs)
        return wrapper
    return deco
Using version_compat deco in a step:
@when('I am foo')
@version_compat(">=2.0.0")
def i_am_foo():
    ...
Pay attention to the ordering - placing decorators outside of pytest-bdd's own stuff will not trigger them (I guess worth opening an issue, but meh).

How can I unit test recursive functions in Python?

I was wondering how I can unit test whether a recursive function has been called correctly. For example, this function:
def test01(number):
    if len(number) == 1:
        return 1
    else:
        return 1 + test01(number[1:])
It counts recursively how many digits a number has (assuming the number is passed in as a string).
So, I want to test if the function test01 has been called recursively. It would be ok if it is implemented just like that, but not if it is implemented as:
def test01(number):
    return len(number)
EDIT:
The recursive approach is mandatory for educational purposes, so the unit test process will automate the checking of programming exercises. Is there a way to check if the function was called more than once? If that is possible, I can have two tests: one asserting the correct output and one checking that the function was called more than once for the same input.
Thank you in advance for your help
Guessing by the tags I assume you want to use unittest to test for the recursive call. Here is an example for such a check:
from unittest import TestCase, mock

import my_module

class RecursionTest(TestCase):
    def setUp(self):
        self.counter = 0  # counts the number of calls

    def checked_fct(self, fct):  # wrapper function that increases a counter on each call
        def wrapped(*args, **kwargs):
            self.counter += 1
            return fct(*args, **kwargs)
        return wrapped

    def test_recursion(self):
        # replace your function with the checked version
        with mock.patch('my_module.test01',
                        self.checked_fct(my_module.test01)):  # assuming test01 lives in my_module.py
            result = my_module.test01('444')  # call the function
            self.assertEqual(result, 3)  # check for the correct result
            self.assertGreater(self.counter, 1)  # ensure the function has been called more than once
Note: I used import my_module instead of from my_module import test01 so that the first call is also mocked - otherwise the number of calls would be one too low.
Depending on what your setup looks like, you may add further tests manually, auto-generate the test code for each test, use parametrization with pytest, or do something else to automate the tests.
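For reference, roughly the same check written as a plain pytest-style test (a sketch, again assuming test01 lives in my_module.py):
from unittest import mock

import my_module

def test_recursion():
    calls = {"count": 0}
    original = my_module.test01

    def counting(*args, **kwargs):
        # count every call, including the recursive ones, then delegate to the original
        calls["count"] += 1
        return original(*args, **kwargs)

    with mock.patch('my_module.test01', counting):
        assert my_module.test01('444') == 3
    assert calls["count"] > 1  # the function must have called itself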
Normally a unit test should at least check that your function works, and it should try to exercise all code paths in it.
Your unit test should therefore take the main (recursive) path several times and then reach the exit path, attaining full coverage.
You can use the 3rd-party coverage module to see if all your code paths are being taken
pip install coverage
python -m coverage erase # coverage is additive, so clear out old runs
python -m coverage run -m unittest discover tests/unit_tests
python -m coverage report -m # report, showing missed lines
Curtis Schlak taught me this strategy recently.
It utilizes Abstract Syntax Trees and the inspect module.
import unittest
import ast
import inspect

from so import test01

class Test(unittest.TestCase):
    # Check to see if the function calls itself recursively
    def test_has_recursive_call(self):
        # Boolean switch
        has_recursive_call = False
        # get the function's source code as a string
        src = inspect.getsource(test01)
        # parse the source code into an Abstract Syntax Tree
        tree = ast.parse(src)
        # walk the tree
        for node in ast.walk(tree):
            # check for a function call whose target is named "test01"
            if (
                type(node) is ast.Call
                and node.func.id == "test01"
            ):
                # flip the Boolean switch to True
                has_recursive_call = True
        # assert: has_recursive_call should be True
        self.assertTrue(
            has_recursive_call,
            msg="The function does not make a recursive call",
        )
        print("\nThe function makes a recursive call")

if __name__ == "__main__":
    unittest.main()

Possible to skip a test depending on fixture value?

I have a lot of tests broken into many different files. In my conftest.py I have something like this:
@pytest.fixture(scope="session",
                params=["foo", "bar", "baz"])
def special_param(request):
    return request.param
While the majority of the tests work with all values, some only work with foo and baz. This means I have this in several of my tests:
def test_example(special_param):
    if special_param == "bar":
        pytest.skip("Test doesn't work with bar")
I find this a little ugly and was hoping for a better way to do it. Is there any way to use the skip decorator to achieve this? If not, is it possible to write my own decorator that can do this?
You can, as suggested in one of the comments by @abarnert, write a custom decorator using functools.wraps for exactly this purpose. In my example below, I am skipping a test if a fixture configs (some configuration dictionary) has the report type enhanced rather than standard (but it could be whatever condition you want to check).
Here is the fixture we'll use to determine whether to skip a test or not:
@pytest.fixture
def configs() -> Dict:
    return {"report_type": "enhanced", "some_other_fixture_params": 123}
Now we write a decorator that will skip the test by inspecting the fixture configs contents for its report_type key value:
from functools import wraps
from typing import Callable

import pytest

def skip_if_report_enhanced(test_function: Callable) -> Callable:
    @wraps(test_function)
    def wrapper(*args, **kwargs):
        configs = kwargs.get("configs")  # configs is the fixture passed into the test function
        report_type = configs.get("report_type", "standard")
        if report_type == "enhanced":
            return pytest.skip(f"Skipping {test_function.__name__}")  # skip!
        return test_function(*args, **kwargs)  # otherwise, run the test
    return wrapper  # return the decorated test function
Note that I am using kwargs.get("configs") to pull the fixture out, since pytest passes fixtures to the test function as keyword arguments.
Below is the test itself; its logic is irrelevant, all that matters is whether it runs or not:
@skip_if_report_enhanced
def test_that_it_ran(configs):
    print("The test ran!")  # shouldn't get here if the report type is set to enhanced
The output from running this test:
============================== 1 skipped in 0.55s ==============================
SKIPPED [100%] Skipped: Skipping test_that_it_ran
Process finished with exit code 0
One solution is to override the fixture with @pytest.mark.parametrize. For example:
@pytest.mark.parametrize("special_param", ["foo"])
def test_example(special_param):
    ...  # do test
Another possibility is to not use the special_param fixture at all and explicitly use the value "foo" where needed. The downside is that this only works if there are no other fixtures that also rely on special_param.
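A third option, sketched below with a made-up marker name (unsupported_param), is to keep the skip logic inside the fixture itself and let tests declare which values they cannot handle via a custom marker. Note this relies on the fixture being function-scoped (so request.node is the requesting test), unlike the session-scoped fixture in the question:
import pytest

@pytest.fixture(params=["foo", "bar", "baz"])
def special_param(request):
    # skip if the requesting test is marked as not supporting this parameter
    unsupported = request.node.get_closest_marker("unsupported_param")
    if unsupported and request.param in unsupported.args:
        pytest.skip(f"Test doesn't work with {request.param}")
    return request.param

@pytest.mark.unsupported_param("bar")
def test_example(special_param):
    ...  # runs for "foo" and "baz", skipped for "bar"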

Is it possible to make pytest report if a function is never called directly in a test?

Example
def main(p):
    if foo_a(p):
        return False
    return p**2

def foo_a(p):
    return p % 11 == 0
Now you can get 100% test coverage by
import unittest
from script import main
class Foobar(unittest.TestCase):
def test_main(self):
self.assertEquals(main(3), 9)
But maybe one wanted foo_a to be p % 2 == 0 instead.
The question
Branch coverage would shed light on it, but I would also like to know if a function was never called "directly" by a test (as main is in the example), but only indirectly (as foo_a is in the example).
Is this possible with pytest?
First of all, the general line of thought is to unit test foo_a as well:
import unittest
from script import main, foo_a

class Foobar(unittest.TestCase):
    def test_main(self):
        self.assertEquals(main(3), 9)

    def test_foo_a(self):
        self.assertEquals(foo_a(11), True)
You are probably looking for coverage (https://coverage.readthedocs.io/en/coverage-4.5.1/), which can be used with pytest via pytest-cov (https://pypi.org/project/pytest-cov/); this tool can show you exactly which lines of code were executed during testing.
But I think there is another way to check for your problem: mutation testing. Here are some libraries that could help you with it:
https://github.com/sixty-north/cosmic-ray
https://github.com/mutpy/mutpy
Also look into property-based testing libraries like https://github.com/HypothesisWorks/hypothesis/tree/master/hypothesis-python
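For instance, a small property-based sketch with hypothesis (my own example, using main and foo_a from the question's script.py) might look like:
from hypothesis import given, strategies as st
from script import main, foo_a

@given(st.integers(min_value=1, max_value=10_000))
def test_main_property(p):
    # the expected behaviour follows directly from foo_a's contract
    if foo_a(p):
        assert main(p) is False
    else:
        assert main(p) == p ** 2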
