mocking multi-parameter function, how to specify spec? - python

I believe my problem may be that I am not giving a spec parameter to the patch method, and you can see that autospec is not working either. Both the commented and uncommented lines give the same result. Searching for "python mock parameter spec" does not help, as "spec" is too generic a word...
I have a function I want to test:
% cat fixTextFiles.py
import os
from unittest.mock import patch

def fixFile(tables, dir, filename):
    [...]
    return None
And a test that I am trying to write:
import os
import tableColumns
import unittest
import fixTextFiles
from unittest.mock import create_autospec

def test_fix_files(mocker):
    rootdir = 'data_20209999/CalAccess/DATA/'
    files = [rootdir + 'TEXT_MEMO.TSV', rootdir + 'SMRY.TSV']
    mocker.patch('os.listdir', return_value=files)
    mocker.patch('fixTextFiles.fixFile', return_value=None, autospec=True)
    #mock_function = create_autospec('fixTextFiles.fixFile', return_value=None)
    fixTextFiles.fixFiles(tableColumns.readTableColumns(), 'data_20209999/CalAccess/DATA', 'TEXT_MEMO.TSV')
And the result:
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-6.0.0, py-1.9.0, pluggy-0.13.1
rootdir: /Users/ray/Projects/CalAccessImpls/open_calaccess_data_py
plugins: mock-3.2.0
collected 3 items
fetchSoSData_test.py .. [ 66%]
fixTextFiles_test.py F [100%]
=================================== FAILURES ===================================
________________________________ test_fix_files ________________________________
mocker = <pytest_mock.plugin.MockFixture object at 0x109623e80>
def test_fix_files(mocker):
rootdir = 'data_20209999/CalAccess/DATA/'
files = [rootdir + 'TEXT_MEMO.TSV', rootdir + 'SMRY.TSV']
mocker.patch('os.listdir', return_value=files)
mocker.patch('fixTextFiles.fixFile', return_value=None, autospec=True)
#mock_function = create_autospec('fixTextFiles.fixFile', return_value=None)
> fixTextFiles.fixFiles(tableColumns.readTableColumns(), 'data_20209999/CalAccess/DATA', 'TEXT_MEMO.TSV')
E TypeError: fixFiles() takes 1 positional argument but 3 were given
fixTextFiles_test.py:21: TypeError
=========================== short test summary info ============================
FAILED fixTextFiles_test.py::test_fix_files - TypeError: fixFiles() takes 1 p...
========================= 1 failed, 2 passed in 0.24s ==========================

Yes, this was a really dumb example of pilot error....
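For anyone landing here via search: the TypeError comes from calling fixFiles with three arguments when it takes one, not from the mock spec. As a side note, create_autospec wants the function object rather than a dotted-path string; a minimal sketch using a stand-in fixFile (the real module isn't shown, so this is only illustrative):

```python
from unittest.mock import create_autospec

# Hypothetical stand-in mirroring the question's fixTextFiles.fixFile signature.
def fixFile(tables, dir, filename):
    return None

# create_autospec takes the function *object*, not a dotted string;
# the resulting mock then enforces the real signature.
mock_fix = create_autospec(fixFile, return_value=None)

mock_fix({}, 'data', 'TEXT_MEMO.TSV')   # matches the real signature: fine
try:
    mock_fix('only-one-arg')            # wrong arity: rejected by the spec
except TypeError as exc:
    print('spec caught it:', exc)
```

With mocker.patch, passing autospec=True as in the question achieves the same signature checking on the patched attribute.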

Related

Parameterize List Returned From Fixture in Pytest Or Pytest-cases

I'm attempting to write a test fixture based on randomly generated data. This randomly generated data needs to be able to accept a seed so that we can generate the same data on two different computers at the same time.
I'm using the pytest parser.addoption hook (I think it's a fixture) to add this ability.
My core issue is that I'd like to be able to parameterize a randomly generated list that uses a fixture as an argument.
from secrets import randbelow
from pytest_cases import parametrize_with_cases, fixture, parametrize

def pytest_addoption(parser):
    parser.addoption("--seed", action="store", default=randbelow(10))

@fixture(scope="session")
def seed(pytestconfig):
    return pytestconfig.getoption("seed")

@fixture(scope="session")
def test_context(seed):
    # In my actual tests these are randomly generated from the seed.
    # Each element here is actually a dictionary, but I'm showing strings
    # for simplicity of example.
    return ['a', 'test', 'list']

@parametrize(group_item=test_context["group_items"])
def case_group_item(group_item: str):
    return group_item, "expected_result_goes_here"

@parametrize_with_cases("sql_statement, expected_result", cases='.')
def test_example(
        sql_statement: str,
        expected_result: int) -> None:
    assert False
Leads to this result.
% pytest test.py
========================================================================================================================================================================== test session starts ===========================================================================================================================================================================
platform darwin -- Python 3.8.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/{Home}/tests, configfile: pytest.ini
plugins: datadir-1.3.1, celery-4.4.7, anyio-3.4.0, cases-3.6.11
collected 0 items / 1 error
================================================================================================================================================================================= ERRORS =================================================================================================================================================================================
________________________________________________________________________________________________________________________________________________________________________ ERROR collecting test.py ________________________________________________________________________________________________________________________________________________________________________
test.py:12: in <module>
???
E TypeError: 'function' object is not subscriptable
======================================================================================================================================================================== short test summary info =========================================================================================================================================================================
ERROR test.py - TypeError: 'function' object is not subscriptable
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================================================================================================================ 1 error in 0.18s ============================================================================================================================================================================
I think I might be able to work around this issue by making an empty test that leaks the test_context up to the global scope, but that feels really, really brittle. I'm looking for another method to still be able to:
Use the seed fixture to generate data
Generate one test per element in the generated list
Not depend on the order in which the tests are run.
Edit
Here's an example of this not working with straight pytest
import pytest
from pytest_cases import parametrize_with_cases, fixture, parametrize

@fixture
def seed():
    return 1

@fixture
def test_context(seed):
    return [seed, 'a', 'test', 'list']

@pytest.fixture(params=test_context)
def example_fixture(request):
    return request.param

def test_reconciliation(example_fixture) -> None:
    print(example_fixture)
    assert False
pytest test.py
========================================================================================================================================================================== test session starts ===========================================================================================================================================================================
platform darwin -- Python 3.8.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/{HOME}/tests/integration, configfile: pytest.ini
plugins: datadir-1.3.1, celery-4.4.7, anyio-3.4.0, cases-3.6.11
collected 0 items / 1 error
================================================================================================================================================================================= ERRORS =================================================================================================================================================================================
________________________________________________________________________________________________________________________________________________________________________ ERROR collecting test.py ________________________________________________________________________________________________________________________________________________________________________
test.py:14: in <module>
???
../../../../../.venvs/data_platform/lib/python3.8/site-packages/_pytest/fixtures.py:1327: in fixture
fixture_marker = FixtureFunctionMarker(
<attrs generated init _pytest.fixtures.FixtureFunctionMarker>:5: in __init__
_inst_dict['params'] = __attr_converter_params(params)
../../../../../.venvs/data_platform/lib/python3.8/site-packages/_pytest/fixtures.py:1159: in _params_converter
return tuple(params) if params is not None else None
E TypeError: 'function' object is not iterable
======================================================================================================================================================================== short test summary info =========================================================================================================================================================================
ERROR test.py - TypeError: 'function' object is not iterable
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================================================================================================================ 1 error in 0.23s ======================================================================================================================================================================
I tried your code with a test file and conftest.py:
conftest.py
import pytest
from secrets import randbelow
from pytest_cases import parametrize_with_cases, fixture, parametrize

def pytest_addoption(parser):
    # If you add a breakpoint() here it'll never be hit.
    parser.addoption("--seed", action="store", default=randbelow(1))

@fixture(scope="session")
def seed(pytestconfig):
    # This line throws an exception since seed was never added.
    return pytestconfig.getoption("seed")
myso_test.py
import pytest
from pytest_cases import parametrize_with_cases, fixture, parametrize

@fixture(scope="session")
def test_context(seed):
    # In my actual tests these are randomly generated from the seed.
    # Each element here is actually a dictionary, but I'm showing strings
    # for simplicity of example.
    return ['a', 'test', 'list']

@parametrize("group_item", [test_context])
def case_group_item(group_item: str):
    return group_item, "expected_result_goes_here"

@parametrize_with_cases("sql_statement, expected_result", cases='.')
def test_example(
        sql_statement: str,
        expected_result: int) -> None:
    assert True
Test Run:
PS C:\Users\AB45365\PycharmProjects\SO> pytest .\myso_test.py -s -v --seed=10
============================================================== test session starts ==============================================================
platform win32 -- Python 3.9.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- c:\users\ab45365\appdata\local\programs\python\python39\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\AB45365\PycharmProjects\SO
plugins: cases-3.6.11, lazy-fixture-0.6.3
collected 1 item
myso_test.py::test_example[group_item-test_context] PASSED
To complement Devang Sanghani's answer: as of pytest 7.1, pytest_addoption is a pytest plugin hook. So, as with all other plugin hooks, it can only be present in plugin files or in conftest.py.
See the note in https://docs.pytest.org/en/7.1.x/reference/reference.html#pytest.hookspec.pytest_addoption :
This function should be implemented only in plugins or conftest.py
files situated at the tests root directory due to how pytest discovers
plugins during startup.
This issue is therefore not related to pytest-cases.
After doing some more digging, I ran into this piece of documentation around pytest-cases:
from secrets import randbelow
import pytest
from pytest_cases import parametrize_with_cases, fixture, parametrize

def pytest_addoption(parser):
    parser.addoption("--seed", action="store", default=randbelow(1))

@fixture(scope="session")
def seed(pytestconfig):
    # return pytestconfig.getoption("seed")
    return 1

@pytest.fixture(scope="session")
def test_context(seed):
    # In my actual tests these are randomly generated from the seed.
    # Each element here is actually a dictionary, but I'm showing strings
    # for simplicity of example.
    return ['a', 'test', 'list']

@parametrize("group_item", [test_context])
def case_group_item(group_item: str):
    return group_item, "expected_result_goes_here"

@parametrize_with_cases("sql_statement, expected_result", cases='.')
def test_example(
        sql_statement: str,
        expected_result: int) -> None:
    assert False
This unfortunately ran me into a new problem: it looks like pytest-cases doesn't currently call pytest_addoption during the fixture execution step right now. I created this ticket to cover that case, but this does effectively solve my original question, even if it has a caveat.

Pytest says module doesn't exist while script works fine otherwise

I wrote some unit tests for a function I'd created:
import numpy as np
from random import randint
from bootstrap import Bootstrap

num = randint(5, 30)
array = np.random.rand(randint(5, 10), randint(2, 10))
boot = Bootstrap(array, num)
result = boot.create_bootstrap_array()

def test_length(result):
    assert len(result) == num

def test_equality(result):
    x = randint(0, num-1)
    assert len(result[x]) == len(result[x+1])

def test_shape(result):
    x = randint(0, num)
    assert result[x].shape == array.shape
The bootstrap module I'm importing is in the parent directory of the project, and that directory is added to PYTHONPATH, so it works fine if I just add a line at the end:
test_length(result)
At the same time when I try to run pytest from the command line I get the following error:
c:\python\rf_model\tests>pytest bootstrap_test.py
============================= test session starts =============================
platform win32 -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: c:\python\rf_model\tests, inifile:
plugins: remotedata-0.2.1, openfiles-0.3.0, doctestplus-0.1.3, arraydiff-0.2
collected 0 items / 1 errors
=================================== ERRORS ====================================
_____________________ ERROR collecting bootstrap_test.py ______________________
ImportError while importing test module 'c:\python\rf_model\tests\bootstrap_test.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
bootstrap_test.py:10: in <module>
from bootstrap import Bootstrap
E ModuleNotFoundError: No module named 'bootstrap'
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.18 seconds ===========================
I believe I correctly followed the conventions for Python test discovery when naming files and functions. What could be the issue here?
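For what it's worth, a common cause of this symptom is that pytest (in its default "prepend" import mode) only inserts the test file's own directory on sys.path, not the parent where bootstrap.py lives. A frequently used workaround, sketched here as an assumption about the layout, is to drop a conftest.py at the project root; pytest imports conftest.py files up the tree and adds their directory to sys.path:

```
rf_model/
├── conftest.py        # can be empty; its presence makes pytest add rf_model/ to sys.path
├── bootstrap.py
└── tests/
    └── bootstrap_test.py
```

This keeps the import working under pytest regardless of how PYTHONPATH is set in the shell.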

setup and teardown with py.test data driven testing

I have a lot of tests that basically do the same actions but with different data, so I wanted to implement them using pytest. I managed to do it in a typical JUnit way, like this:
import pytest
import random

d = {1: 'Hi', 2: 'How', 3: 'Are', 4: 'You ?'}

def setup_function(function):
    print("setUp", flush=True)

def teardown_function(function):
    print("tearDown", flush=True)

@pytest.mark.parametrize("test_input", [1, 2, 3, 4])
def test_one(test_input):
    print("Test with data " + str(test_input))
    print(d[test_input])
    assert True
Which gives me the following output
C:\Temp>pytest test_prueba.py -s
============================= test session starts =============================
platform win32 -- Python 3.6.5, pytest-3.5.0, py-1.5.3, pluggy-0.6.0
rootdir: C:\Temp, inifile:
collected 4 items
test_prueba.py
setUp
Test with data 1
Hi
.tearDown
setUp
Test with data 2
How
.tearDown
setUp
Test with data 3
Are
.tearDown
setUp
Test with data 4
You ?
.tearDown
========================== 4 passed in 0.03 seconds ===========================
The problem now is that I would like to perform some actions in the setup and teardown that need access to the test_input value.
Is there any elegant solution for this?
Maybe to achieve this I should use parametrization, or the setup/teardown, in a different way?
If that's the case, can someone post an example of data-driven testing with parametrized setup and teardown?
Thanks!!!
The parametrize mark on a test is more for specifying raw inputs and expected outputs. If you need access to the parameter in the setup, then it's more part of a fixture than a test.
So you might like to try:
import pytest

d = {"good": "SUCCESS", "bad": "FAIL"}

def thing_that_uses_param(param):
    print("param is", repr(param))
    yield d.get(param)
    print("test done")

@pytest.fixture(params=["good", "bad", "error"])
def parameterized_fixture(request):
    param = request.param
    yield from thing_that_uses_param(param)

def test_one(parameterized_fixture):
    assert parameterized_fixture.lower() == "success"
Which outputs:
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0 -- c:\Users\User\AppData\Local\Programs\Python\Python35-32\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\User\Documents\python, inifile:
collecting ... collected 3 items
a.py::test_one[good] PASSED [ 33%]
a.py::test_one[bad] FAILED [ 66%]
a.py::test_one[error] FAILED [100%]
================================== FAILURES ===================================
________________________________ test_one[bad] ________________________________
parameterized_fixture = 'FAIL'
def test_one(parameterized_fixture):
> assert parameterized_fixture.lower() == "success"
E AssertionError: assert 'fail' == 'success'
E - fail
E + success
a.py:28: AssertionError
---------------------------- Captured stdout setup ----------------------------
param is 'bad'
-------------------------- Captured stdout teardown ---------------------------
test done
_______________________________ test_one[error] _______________________________
parameterized_fixture = None
def test_one(parameterized_fixture):
> assert parameterized_fixture.lower() == "success"
E AttributeError: 'NoneType' object has no attribute 'lower'
a.py:28: AttributeError
---------------------------- Captured stdout setup ----------------------------
param is 'error'
-------------------------- Captured stdout teardown ---------------------------
test done
===================== 2 failed, 1 passed in 0.08 seconds ======================
However, that requires you to create a parameterized fixture for each set of parameters you might want to use with a fixture.
You could alternatively mix and match the parametrize mark and a fixture that reads those params, but that requires the test to use specific names for the parameters. It also needs to make sure such names are unique so they won't conflict with any other fixtures trying to do the same thing. For instance:
import pytest

d = {"good": "SUCCESS", "bad": "FAIL"}

def thing_that_uses_param(param):
    print("param is", repr(param))
    yield d.get(param)
    print("test done")

@pytest.fixture
def my_fixture(request):
    if "my_fixture_param" not in request.funcargnames:
        raise ValueError("could use a default instead here...")
    param = request.getfuncargvalue("my_fixture_param")
    yield from thing_that_uses_param(param)

@pytest.mark.parametrize("my_fixture_param", ["good", "bad", "error"])
def test_two(my_fixture, my_fixture_param):
    assert my_fixture.lower() == "success"
Which outputs:
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0 -- c:\Users\User\AppData\Local\Programs\Python\Python35-32\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\User\Documents\python, inifile:
collecting ... collected 3 items
a.py::test_two[good] PASSED [ 33%]
a.py::test_two[bad] FAILED [ 66%]
a.py::test_two[error] FAILED [100%]
================================== FAILURES ===================================
________________________________ test_two[bad] ________________________________
my_fixture = 'FAIL', my_fixture_param = 'bad'
@pytest.mark.parametrize("my_fixture_param", ["good", "bad", "error"])
def test_two(my_fixture, my_fixture_param):
> assert my_fixture.lower() == "success"
E AssertionError: assert 'fail' == 'success'
E - fail
E + success
a.py:25: AssertionError
---------------------------- Captured stdout setup ----------------------------
param is 'bad'
-------------------------- Captured stdout teardown ---------------------------
test done
_______________________________ test_two[error] _______________________________
my_fixture = None, my_fixture_param = 'error'
@pytest.mark.parametrize("my_fixture_param", ["good", "bad", "error"])
def test_two(my_fixture, my_fixture_param):
> assert my_fixture.lower() == "success"
E AttributeError: 'NoneType' object has no attribute 'lower'
a.py:25: AttributeError
---------------------------- Captured stdout setup ----------------------------
param is 'error'
-------------------------- Captured stdout teardown ---------------------------
test done
===================== 2 failed, 1 passed in 0.08 seconds ======================
I think what you are looking for is yield fixtures.
You can make an autouse fixture that runs something before and after every test, and you can access all the test metadata (marks, parameters, etc.).
You can read about it here.
Access to the parameters is via the function argument called request.
IMO, setup and teardown should not access test_input values; if you want it to be that way, then there is probably some problem in your test logic. Setup and teardown should be independent of the values used by the test. However, you may use another fixture to get the task done.
I have a lot of tests that basically do the same actions but with different data
In addition to Dunes' answer relying solely on pytest, this part of your question makes me think that pytest-cases could be useful to you, too. Especially if some test data should be parametrized while other do not.
See this other post for an example, and also the documentation of course. I'm the author by the way ;)

How can I write the pytest.main messages into a string?

When I call pytest.main(...) in Python, it will display the unit test messages in the output window, such as
============================= test session starts =============================
platform win32 -- Python 3.6.4, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: xxx, inifile: pytest.ini
plugins: cov-2.5.1, hypothesis-3.38.5
collected 1 item
..\test_example.py F [100%]
================================== FAILURES ===================================
_______________________________ test_something ________________________________
def test_something():
> assert 1 == 0
E assert 1 == 0
..\test_example.py:5: AssertionError
========================== 1 failed in 0.60 seconds ===========================
My question is simply: how can I get the messages above into a string object? The documentation doesn't say anything about this, and what pytest.main() returns is only an integer representing the exit code.
https://docs.pytest.org/en/latest/usage.html#calling-pytest-from-python-code
I don't know pytest, but you can try to redirect the standard output to a file. Something like this:
import sys
sys.stdout = open("log.txt", "w")
After this you can read the file to get the strings. What do you want to do with the strings?

pytest doesn't find the test

I am trying to make a simple test with an assert statement, like this:
def sqr(x):
    return x**2 + 1

def test_sqr():
    assert kvadrat(2) == 4
! pytest
and it returns no collected tests. Does anyone have any ideas what might be wrong?
For pytest to find the tests these have to be in a file in the current directory or a sub-directory and the filename has to begin with test (although that's probably customizable).
So if you use (inside a Jupyter notebook):
%%file test_sth.py
def sqr(x):
    return x**2 + 1

def test_sqr():
    assert kvadrat(2) == 4
That creates a file named test_sth.py with the given contents.
Then run ! pytest and it should work (well, fail):
============================= test session starts =============================
platform win32 -- Python 3.6.3, pytest-3.3.0, py-1.5.2, pluggy-0.6.0
rootdir: D:\JupyterNotebooks, inifile:
plugins: hypothesis-3.38.5
collected 1 item
test_sth.py F [100%]
================================== FAILURES ===================================
__________________________________ test_sqr ___________________________________
def test_sqr():
> assert kvadrat(2) == 4
E NameError: name 'kvadrat' is not defined
test_sth.py:5: NameError
========================== 1 failed in 0.12 seconds ===========================
