I have several test cases and test functions, and the list of test cases is different for the different functions. This can be easily done with pytest.mark.parametrize. The extra need I have is to load a resource (a file in my case) and I'd like to have this file only loaded once per test session and cached.
Below is an example illustrating what I want. It works, but I would like to find a way to use pytest fixtures or some other caching mechanism so that I don't have to do the caching myself and repeat the pars = load_file(pars) line in each test function.
Can someone please explain how to do this with pytest?
import pytest

case_1 = dict(label='case_1', spam=1)
case_2 = dict(label='case_2', spam=2)
case_3 = dict(label='case_3', spam=3)

_cache = {}

def load_file(pars):
    if pars['label'] in _cache:
        print('load_file from cache', pars)
        return _cache[pars['label']]
    else:
        print('load_file loading', pars)
        pars['file'] = pars['label'] + ' spam!'
        _cache[pars['label']] = pars
        return pars

@pytest.mark.parametrize('pars', [case_1, case_2])
def test_a(pars):
    pars = load_file(pars)
    print('test_a', pars)

@pytest.mark.parametrize('pars', [case_2, case_3])
def test_b(pars):
    pars = load_file(pars)
    print('test_b', pars)

@pytest.mark.parametrize('pars', [case_1, case_2, case_3])
def test_c(pars):
    pars = load_file(pars)
    print('test_c', pars)

### more tests here for various combinations of test cases
The first and obvious solution is to use session-scoped fixtures. However, it requires restructuring the test file and loading all of the known files in advance.
import pytest

@pytest.fixture(scope='session')
def pars_all():
    cache = {}
    for case in [case_1, case_2, case_3]:
        cache[case['label']] = 'case {} content'.format(case['label'])
    yield cache
    # optionally destroy or unload or unlock here.

@pytest.fixture(scope='function')
def pars(request, pars_all):
    label = request.param
    yield pars_all[label]

@pytest.mark.parametrize('pars', ['case_1', 'case_2'], indirect=True)
def test(pars):
    pass
Please note the indirect parametrisation. It means that the pars fixture will be prepared instead, getting a parameter value in request.param. The parameter name and the fixture must share the same name.
The session-scoped fixture (or module-scoped, or class-scoped if you wish) will be prepared only once for all the tests. It is important to note that the wider-scoped fixtures can be used in the more narrow-scoped or same-scoped fixtures, but not in the opposite direction.
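For illustration, here is a minimal sketch of that scoping rule (the fixture names are made up): a function-scoped fixture may depend on a session-scoped one, while the reverse direction is rejected by pytest with a ScopeMismatch error.

import pytest

@pytest.fixture(scope='session')
def shared_resource():
    return {}

@pytest.fixture  # function-scoped by default; may use the session-scoped fixture
def per_test_view(shared_resource):
    return dict(shared_resource)

# The opposite direction is not allowed:
# @pytest.fixture(scope='session')
# def broken(per_test_view):   # pytest raises ScopeMismatch here
#     ...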
If the cases are not that well-defined, it is just as easy; the cache is simply populated on demand:
import pytest

@pytest.fixture(scope='session')
def pars_all():
    yield {}

@pytest.fixture(scope='function')
def pars(request, pars_all):
    label = request.param
    if label not in pars_all:
        print('[[[{}]]]'.format(request.param))
        pars_all[label] = 'content of {}'.format(label)
    yield pars_all[label]

@pytest.mark.parametrize('pars', ['case_1', 'case_2'], indirect=True)
def test_1(pars):
    print(pars)

@pytest.mark.parametrize('pars', ['case_1', 'case_3'], indirect=True)
def test_2(pars):
    print(pars)
Note that the {} object is created only once, because it is session-scoped, and is shared among all tests & callspecs. So, if one fixture adds something to it, other fixtures will see it too. You can see this in how case_1 is reused in test_2:
$ pytest -s -v -ra test_me.py
======= test session starts ==========
...
collected 4 items
test_me.py::test_1[case_1] [[[case_1]]]
content of case_1
PASSED
test_me.py::test_1[case_2] [[[case_2]]]
content of case_2
PASSED
test_me.py::test_2[case_1] content of case_1
PASSED
test_me.py::test_2[case_3] [[[case_3]]]
content of case_3
PASSED
======== 4 passed in 0.01 seconds ==========
A simple use of @lru_cache in your file-parsing function can also do the caching trick:
from functools import lru_cache

@lru_cache(maxsize=3)
def load_file(file_name):
    """This function loads the file and returns its contents."""
    print("loading file " + file_name)
    return "<dummy content for " + file_name + ">"
You can also reach the same result while making the whole code a bit more readable by separating the test functions from the test cases with pytest-cases (I'm the author by the way!):
from functools import lru_cache
from pytest_cases import parametrize_with_cases

@lru_cache(maxsize=3)
def load_file(file_name):
    """This function loads the file and returns its contents."""
    print("loading file " + file_name)
    return "<dummy content for " + file_name + ">"

def case_1():
    return load_file('file1')

def case_2():
    return load_file('file2')

def case_3():
    return load_file('file3')

@parametrize_with_cases("pars", cases=[case_1, case_2])
def test_a(pars):
    print('test_a', pars)

@parametrize_with_cases("pars", cases=[case_2, case_3])
def test_b(pars):
    print('test_b', pars)

@parametrize_with_cases("pars", cases=[case_1, case_2, case_3])
def test_c(pars):
    print('test_c', pars)
Yields:
loading file file1
test_a <dummy content for file1>PASSED
loading file file2
test_a <dummy content for file2>PASSED
test_b <dummy content for file2>PASSED
loading file file3
test_b <dummy content for file3>PASSED
test_c <dummy content for file1>PASSED
test_c <dummy content for file2>PASSED
test_c <dummy content for file3>PASSED
Finally note that, depending on your use case, you might wish to switch to a case generator by using @parametrize on a case function, which could be more readable:
from pytest_cases import parametrize

@parametrize("file_name", ["file1", "file2"])
def case_gen(file_name):
    return load_file(file_name)
Also look at tags & filters, if you do not want to hardcode the cases explicitly.
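For example, here is a rough sketch of tag-based filtering, assuming the @case decorator and the has_tag argument of parametrize_with_cases behave as described in the pytest-cases documentation:

from pytest_cases import case, parametrize_with_cases

@case(tags='small')
def case_file1():
    return load_file('file1')

@case(tags='large')
def case_file3():
    return load_file('file3')

# cases='.' collects case functions from the current module,
# and has_tag keeps only the ones tagged 'small'.
@parametrize_with_cases('pars', cases='.', has_tag='small')
def test_small_files(pars):
    print('test_small_files', pars)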
I use pathlib.Path.open() and pathlib.Path.unlink() in my production code. The unit tests for that work, but I use two different ways to patch(): one with the @patch decorator and one with a context manager via mock.patch().
I would like to use only @patch, like this:
class MyTest(unittest.TestCase):
    @mock.patch('pathlib.Path.unlink')
    @mock.patch('pathlib.Path.open')
    def test_foobar(self, mock_open, mock_unlink):
But the real code currently looks like this:
import unittest
from unittest import mock
import pathlib

class MyTest(unittest.TestCase):
    @mock.patch('pathlib.Path.unlink')
    def test_foobar(self, mock_unlink):
        # simulated CSV file
        opener = mock.mock_open(read_data='A;B\n1;2')
        with mock.patch('pathlib.Path.open', opener):
            result = validate_csv(file_path=pathlib.Path('foo.csv'))
        self.assertTrue(result)
Technically my problem here is that I do not know how to add my CSV content to mock_open when using the @patch decorator.
It could look like this:
class MyTest(unittest.TestCase):
    @mock.patch('pathlib.Path.open')
    @mock.patch('pathlib.Path.unlink')
    def test_foobar(self, mymock_unlink, mymock_open):
        # simulated CSV file
        opener = mock.mock_open(read_data='A;B\n1;2')
        # QUESTION: How do I bring 'opener' and 'mymock_open'
        # together now?
        result = validate_csv(file_path=pathlib.Path('foo.csv'))
        self.assertTrue(result)
But the goal of my question is to improve readability and maintainability of the code. Using two decorators would reduce the indentation, and consistently choosing one way (decorators or context managers) would IMHO be easier to read.
For learning purposes
Q: How do I bring 'opener' and 'mymock_open' together now?
A: Assign side_effect and return_value of mymock_open to those of opener.
@mock.patch('pathlib.Path.open')
@mock.patch('pathlib.Path.unlink')
def test_foobar(self, mymock_unlink, mymock_open):
    # simulated CSV file
    opener = mock.mock_open(read_data='A;B\n1;2')
    # QUESTION: How do I bring 'opener' and 'mymock_open'
    # together now?
    mymock_open.side_effect = opener.side_effect      # +
    mymock_open.return_value = opener.return_value    # +
    result = validate_csv(file_path=pathlib.Path('foo.csv'))
    opener.assert_not_called()                        # +
    mymock_open.assert_called_once()                  # +
    mymock_unlink.assert_called_once()                # +
    self.assertTrue(result)
But this is hardly a readability improvement.
Both using decorators
@mock.patch('pathlib.Path.open', new_callable=lambda: mock.mock_open(read_data='A;B\n1;2'))  # +
@mock.patch('pathlib.Path.unlink')
def test_foobar(self, mock_unlink, mock_open):
    result = validate_csv(file_path=pathlib.Path('foo.csv'))
    mock_open.assert_called_once()    # +
    mock_unlink.assert_called_once()  # +
    self.assertTrue(result)
Passing just mock.mock_open(read_data='A;B\n1;2') (as the positional argument new) instead of new_callable=lambda: ... works too, but then @mock.patch won't pass mock_open to test_foobar.
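For completeness, a sketch of that positional-new variant; since the replacement object is created right in the decorator, only mock_unlink is injected into the test:

@mock.patch('pathlib.Path.open', mock.mock_open(read_data='A;B\n1;2'))
@mock.patch('pathlib.Path.unlink')
def test_foobar(self, mock_unlink):
    result = validate_csv(file_path=pathlib.Path('foo.csv'))
    mock_unlink.assert_called_once()
    self.assertTrue(result)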
Both using context managers
def test_foobar(self):
    # simulated CSV file
    opener = mock.mock_open(read_data='A;B\n1;2')
    with mock.patch('pathlib.Path.unlink') as mock_unlink, \
            mock.patch('pathlib.Path.open', opener) as mock_open:  # +
        self.assertIs(mock_open, opener)  # +
        result = validate_csv(file_path=pathlib.Path('foo.csv'))
        mock_open.assert_called_once()    # +
        mock_unlink.assert_called_once()  # +
    self.assertTrue(result)
Notice that mock_open is the same instance as opener.
Verifying the solutions
Sample implementation of validate_csv for a minimal, reproducible example:
def validate_csv(file_path):
    """
    :param pathlib.Path file_path:
    :rtype: bool
    """
    with file_path.open() as f:
        data = f.read()
    file_path.unlink()
    return data == 'A;B\n1;2'
I want to perform API testing using pytest. I have the API parameters set up as JSON and loop over multiple expectations for each test.
I want an assert to fail the test and then continue with the next test from the JSON data, but I don't see an option for looping over tests other than a parametrized fixture.
def test_cases(self):
    params = {}
    for testcase in testcases:
        for parameters in testcase[2]:
            params[parameters['name']] = parameters['value']
        params['q'] = testcase[1]
        API_SERVER_URL = 'URL/'
        response = requests.get(API_SERVER_URL, params=params, headers='')
        for expectation in expectations:
            jsonpath_expr = '$.[0].' + expectation['group_key'] + '.' + expectation['key']
            expected_value = expectation['value']
            actual_value = parse(jsonpath_expr).find(response.json())
            # test_case(actual_value[0].value, expected_value)
            assert actual_value[0].value == expected_value
After an assert fails, I want to continue with the next iteration of the for loop over testcases. How can this be achieved?
You should definitely look into parameterising your tests, but another way is to just collect the failures yourself:
def test_cases(self):
    failures = []
    params = {}
    for testcase in testcases:
        for parameters in testcase[2]:
            params[parameters['name']] = parameters['value']
        params['q'] = testcase[1]
        API_SERVER_URL = 'URL/'
        response = requests.get(API_SERVER_URL, params=params, headers='')
        for expectation in expectations:
            jsonpath_expr = '$.[0].' + expectation['group_key'] + '.' + expectation['key']
            expected_value = expectation['value']
            actual_value = parse(jsonpath_expr).find(response.json())
            if actual_value[0].value != expected_value:
                failures.append((actual_value[0].value, expected_value))
    assert failures == []
So, if there are no failures, failures will still be the empty list; otherwise you will see its contents printed out.
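If you do go the parametrization route, a rough sketch could look like the following, assuming testcases and expectations have the structure used above and parse comes from whichever jsonpath library you already use. Each (testcase, expectation) pair then becomes its own test, so one failing assert does not stop the rest:

import pytest
import requests

@pytest.mark.parametrize('testcase', testcases, ids=lambda tc: str(tc[1]))
@pytest.mark.parametrize('expectation', expectations, ids=lambda e: e['key'])
def test_api(testcase, expectation):
    params = {p['name']: p['value'] for p in testcase[2]}
    params['q'] = testcase[1]
    response = requests.get('URL/', params=params)
    jsonpath_expr = '$.[0].' + expectation['group_key'] + '.' + expectation['key']
    actual_value = parse(jsonpath_expr).find(response.json())
    assert actual_value[0].value == expectation['value']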
Disclaimer: Yes I am well aware this is a mad attempt.
Use case:
I am reading from a config file to run a test collection, where each such collection comprises a set of test cases with corresponding results and a fixed setup.
Flow (for each test case):
Setup: wipe and set up the database with the specific test case dataset (a glorified SQL file)
load expected test case results from csv
execute the collection's query/report
compare results.
Sounds good, except the people writing the test cases are more from a tech admin perspective, so the goal is to enable this without writing any python code.
code
Assume these functions exist.
# test_queries.py
def gather_collections(): ...            # yields (collection, query, config)
def gather_cases(collection): ...        # yields test_case names
def load_collection_stubs(collection): ...   # returns None
def load_case_dataset(test_case): ...        # returns None
def read_case_result_csv(test_case): ...     # returns [csv_result]
def execute(query): ...                      # returns [query_result]

class TestQueries(unittest.TestCase):
    def setup_method(self, method):
        collection = self._item.name.replace('test_', '')
        load_collection_stubs(collection)
# conftest.py
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    item.cls._item = item
    yield
Example Data
Collection stubs / data (setting up of environment)
-- stubs/test_setup_log.sql
DROP DATABASE IF EXISTS `test`;
CREATE DATABASE `test`;
USE test;
CREATE TABLE log (`id` int(9) NOT NULL AUTO_INCREMENT, `timestamp` datetime NOT NULL DEFAULT NOW(), `username` varchar(100) NOT NULL, `message` varchar(500));
Query to test
-- queries/count.sql
SELECT count(*) as `log_count` from test.log where username = 'unicorn';
Test case 1 input data
-- test_case_1.sql
INSERT INTO log (`id`, `timestamp`, `username`, `message`)
VALUES
(1,'2020-12-18T11:23.01Z', 'unicorn', 'user logged in'),
(2,'2020-12-18T11:23.02Z', 'halsey', 'user logged off'),
(3,'2020-12-18T11:23.04Z', 'unicorn', 'user navigated to home')
Test case 1 expected result
test_case_1.csv
log_count
2
Attempt 1
for collection, query, config in gather_collections():
    test_method_name = 'test_{}'.format(collection)
    LOGGER.debug("collections.{}.test - {}".format(collection, config))
    cases = gather_cases(collection)
    LOGGER.debug("collections.{}.cases - {}".format(collection, cases))
    setattr(
        TestQueries,
        test_method_name,
        pytest.mark.parametrize(
            'case_name',
            cases,
            ids=cases
        )(
            lambda self, case_name: (
                load_case_dataset(case_name),
                self.assertEqual(execute(query, case_name), read_case_result_csv(case_name))
            )
        )
    )
Attempt 2
for collection, query, config in gather_collections():
    test_method_name = 'test_{}'.format(collection)
    LOGGER.debug("collections.{}.test - {}".format(collection, config))
    setattr(
        TestQueries,
        test_method_name,
        lambda self, case_name: (
            load_case_dataset(case_name),
            self.assertEqual(execute(query, case_name), read_case_result_csv(case_name))
        )
    )

def pytest_generate_tests(metafunc):
    collection = metafunc.function.__name__.replace('test_', '')
    # FIXME logs and id setting not working
    cases = gather_cases(collection)
    LOGGER.info("collections.{}.pytest.cases - {}".format(collection, cases))
    metafunc.parametrize(
        'case_name',
        cases,
        ids=cases
    )
So I figured it out, but it's not the most elegant solution.
Essentially you use one function and then use some of pytest's hooks to change the function names for reporting.
There are numerous issues, e.g. if you don't use pytest.param to pass the parameters to parametrize, then you do not have the required information available.
Also, the method passed to setup_method is not aware of the actual iteration being run when it's called, so I had to hack that in with the iter counter.
# test_queries.py
def gather_tests():
    global TESTS
    for test_collection_name in TESTS.keys():
        LOGGER.debug("collections.{}.gather - {}".format(test_collection_name, TESTS[test_collection_name]))
        query = path.join(SRC_DIR, TESTS[test_collection_name]['query'])
        cases_dir = TESTS[test_collection_name]['cases']
        result_sets = path.join(TEST_DIR, cases_dir, '*.csv')
        for case_result_csv in glob.glob(result_sets):
            test_case_name = path.splitext(path.basename(case_result_csv))[0]
            yield test_case_name, query, test_collection_name, TESTS[test_collection_name]

class TestQueries():
    iter = 0

    def setup_method(self, method):
        method_name = method.__name__  # or self._item.originalname
        global TESTS
        if method_name == 'test_scripts_reports':
            _mark = next((m for m in method.pytestmark if m.name == 'parametrize' and 'collection_name' in m.args[0]), None)
            if not _mark:
                raise Exception('test {} missing collection_name parametrization'.format(method_name))  # nothing to do here
            _args = _mark.args[0]
            _params = _mark.args[1]
            LOGGER.debug('setup_method: _params - {}'.format(_params))
            if not _params:
                raise Exception('test {} missing pytest.params'.format(method_name))  # nothing to do here
            _currparams = _params[self.iter]
            self.iter += 1
            _argpos = [arg.strip() for arg in _args.split(',')].index('collection_name')
            collection = _currparams.values[_argpos]
            LOGGER.debug('collections.{}.setup_method[{}] - {}'.format(collection, self.iter, _currparams))
            load_collection_stubs(collection)

    @pytest.mark.parametrize(
        'case_name, collection_query, collection_name, collection_config',
        [pytest.param(*c, id='{}:{}'.format(c[2], c[0])) for c in gather_tests()]
    )
    def test_scripts_reports(self, case_name, collection_query, collection_name, collection_config):
        if not path.isfile(collection_query):
            pytest.skip("report query does not exist: {}".format(collection_query))
        LOGGER.debug("test_scripts_reports.{}.{} - ".format(collection_name, case_name))
        load_case_dataset(case_name)
        assert execute(collection_query, case_name) == read_case_result_csv(case_name)
Then, to make the test ids more human-readable, you can do this:
# conftest.py
def pytest_collection_modifyitems(items):
    # https://stackoverflow.com/questions/61317809/pytest-dynamically-generating-test-name-during-runtime
    for item in items:
        if item.originalname == 'test_scripts_reports':
            item._nodeid = re.sub(r'::\w+::\w+\[', '[', item.nodeid)
The result, with the following files:
stubs/
    00-wipe-db.sql
    setup-db.sql
queries/
    report1.sql
collection/
    report1/
        case1.sql
        case1.csv
        case2.sql
        case2.csv
# results (with setup_method firing before each test and loading the appropriate stubs as per configuration)
FAILED test_queries.py[report1:case1]
FAILED test_queries.py[report1:case2]
I am brand new to this library. Here is the call stack of my mocked object:
[call(),
call('test'),
call().instance('test'),
call().instance().database('test'),
call().instance().database().snapshot(),
call().instance().database().snapshot().__enter__(),
call().instance().database().snapshot().__enter__().execute_sql('SELECT * FROM users'),
call().instance().database().snapshot().__exit__(None, None, None),
call().instance().database().snapshot().__enter__().execute_sql().__iter__()]
Here is the code I have used:
@mock.patch('testmodule.Client')
def test_read_with_query(self, mock_client):
    mock = mock_client()
    pipeline = TestPipeline()
    records = pipeline | ReadFromSpanner(TEST_PROJECT_ID, TEST_INSTANCE_ID, self.database_id).with_query('SELECT * FROM users')
    pipeline.run()
    print(mock_client.mock_calls)
    exit()
I want to mock this whole stack so that it eventually gives me some fake data, which I will provide as a return value.
The code being tested is:
spanner_client = Client(self.project_id)
instance = spanner_client.instance(self.instance_id)
database = instance.database(self.database_id)
with database.snapshot() as snapshot:
    results = snapshot.execute_sql(self.query)
So my requirement is that the results variable should contain the data I will provide.
How can I provide a return value to such nested calls?
Thanks
Create separate MagicMock instances for the instance, database and snapshot objects in the code under test. Use return_value to configure the return values of each method. Here is an example. I simplified the method under test to be a free-standing function called mut.
# test_module.py : the module under test
class Client:
    pass

def mut(project_id, instance_id, database_id, query):
    spanner_client = Client(project_id)
    instance = spanner_client.instance(instance_id)
    database = instance.database(database_id)
    with database.snapshot() as snapshot:
        results = snapshot.execute_sql(query)
    return results

# test code (pytest)
from unittest.mock import MagicMock
from unittest import mock
from test_module import mut

@mock.patch('test_module.Client')
def test_read_with_query(mock_client_class):
    mock_client = MagicMock()
    mock_instance = MagicMock()
    mock_database = MagicMock()
    mock_snapshot = MagicMock()
    expected = 'fake query results'

    mock_client_class.return_value = mock_client
    mock_client.instance.return_value = mock_instance
    mock_instance.database.return_value = mock_database
    mock_database.snapshot.return_value = mock_snapshot
    mock_snapshot.execute_sql.return_value = expected
    mock_snapshot.__enter__.return_value = mock_snapshot

    observed = mut(29, 42, 77, 'select *')

    mock_client_class.assert_called_once_with(29)
    mock_client.instance.assert_called_once_with(42)
    mock_instance.database.assert_called_once_with(77)
    mock_database.snapshot.assert_called_once_with()
    mock_snapshot.__enter__.assert_called_once_with()
    mock_snapshot.execute_sql.assert_called_once_with('select *')
    assert observed == expected
This test is kind of portly. Consider breaking it apart by using a fixture and a before function that sets up the mocks.
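A possible refactoring sketch along those lines, moving the mock wiring into a pytest fixture so each test only states the data and the assertions it cares about (the names are illustrative):

import pytest
from unittest import mock
from test_module import mut

@pytest.fixture
def spanner_mocks():
    with mock.patch('test_module.Client') as mock_client_class:
        # The object returned by database.snapshot(); entering the context
        # manager returns the same object, matching the answer above.
        mock_snapshot = (mock_client_class.return_value
                         .instance.return_value
                         .database.return_value
                         .snapshot.return_value)
        mock_snapshot.__enter__.return_value = mock_snapshot
        yield mock_client_class, mock_snapshot

def test_read_with_query(spanner_mocks):
    mock_client_class, mock_snapshot = spanner_mocks
    mock_snapshot.execute_sql.return_value = 'fake query results'
    assert mut(29, 42, 77, 'select *') == 'fake query results'
    mock_snapshot.execute_sql.assert_called_once_with('select *')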
Either set the value directly on your Mock instance (the __enter__ and __exit__ calls should not need to be spelled out) with:
mock.return_value.instance.return_value.database.return_value.snapshot.return_value.execute_sql.return_value = MY_MOCKED_DATA
or patch the target method and set its return_value, something like:
@mock.patch('database_engine.execute_sql', return_value=MY_MOCKED_DATA)
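Since the code under test uses database.snapshot() as a context manager, the object bound by the with statement is whatever __enter__ returns. A hedged sketch of the chained approach that accounts for that is shown below; MY_MOCKED_DATA and the module name are placeholders:

from unittest import mock
from test_module import mut  # assumption: module/function names from the answer above

MY_MOCKED_DATA = ['row1', 'row2']  # placeholder fake rows

@mock.patch('test_module.Client')
def test_read_with_query(mock_client):
    # Walk the call chain down to the object bound by `as snapshot:`.
    snapshot = (mock_client.return_value
                .instance.return_value
                .database.return_value
                .snapshot.return_value
                .__enter__.return_value)
    snapshot.execute_sql.return_value = MY_MOCKED_DATA
    assert mut(1, 2, 3, 'SELECT * FROM users') == MY_MOCKED_DATA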
I want to test appendRole, which calls getFileAsJson to read a file with open.
My problem is that I don't know which open will be next. There are many if/elif branches.
def appendRole(self, hosts=None, _newrole=None, newSubroles=None, undoRole=False, config_path=None):
    """ Same as changeRole but keeps subroles """
    if hosts is None:
        hosts = ["127.0.0.1"]
    if newSubroles is None:
        newSubroles = {}
    if config_path is None:
        config_path = self.config_path
    with self._lock:
        default = {}
        data = self.getFileAsJson(config_path, default)
        ...................
        ...................
        ...................
        ...................
        data1 = self.getFileAsJson(self.config_path_all, {"some"})
        data2 = self.getFileAsJson(self.config_path_core, {"something"})
        ...................
        ...................
        ...................

def getFileAsJson(self, config_path, init_value):
    """
    read file and return json data;
    if it doesn't exist, it will be created
    """
    self.createFile(config_path, init_value)
    try:
        with open(config_path, "r") as json_data:
            data = json.load(json_data)
            return data
    except Exception as e:
        self.logAndRaiseValueError(
            "Can't read data from %s because %s" % (config_path, e))
Even though you can find an answer to your question at Python mock builtin 'open' in a class using two different files, I would like to encourage you to change your approach: write tests for getFileAsJson() and then trust it.
To test appendRole(), use mock.patch to patch getFileAsJson(); then, via the side_effect attribute, you can instruct the mock to return exactly what you need for your test.
So, after some tests on getFileAsJson(), where you can use mock_open() to mock the open builtin (maybe you need to patch createFile() too), your appendRole() test looks something like this:
@mock.patch('mymodule.getFileAsJson', autospec=True)
def test_appendRole(self, mock_getFileAsJson):
    mock_getFileAsJson.side_effect = [m_data, m_data1, m_data2, ...]
    # where m_data, m_data1, m_data2, ... is whatever getFileAsJson is
    # supposed to return in your test
    # Invoke appendRole() to test it
    appendRole(bla, bla)
    # Now you can use the mock_getFileAsJson.assert* family of methods to
    # check how your appendRole called it.
    # Moreover, add whatever you need to test in appendRole()
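And for the getFileAsJson() tests themselves, a hedged sketch using mock_open(); the module and class names are placeholders, and createFile() is patched out as suggested:

from unittest import mock

@mock.patch('mymodule.MyClass.createFile')
@mock.patch('builtins.open', new_callable=mock.mock_open, read_data='{"spam": 1}')
def test_getFileAsJson(self, mock_file, mock_createFile):
    # self.obj is assumed to be an instance of the class under test,
    # created in setUp(); 'mymodule.MyClass' is a placeholder path.
    data = self.obj.getFileAsJson('config.json', {})
    mock_createFile.assert_called_once_with('config.json', {})
    mock_file.assert_called_once_with('config.json', 'r')
    assert data == {'spam': 1}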