I'm writing a small fixture for implementing regression tests. The function under test does not contain any assert statements; instead it produces output that is compared to a previously recorded output, which is assumed to be correct.
Here is a simplified snippet to demonstrate what I'm doing:
@pytest.yield_fixture()
def regtest(request):
    fp = cStringIO.StringIO()
    yield fp
    reset, full_path, id_ = _setup(request)
    if reset:
        _record_output(fp.getvalue(), full_path)
    else:
        failed = _compare_output(fp.getvalue(), full_path, request, id_)
        if failed:
            pytest.fail("regression test %s failed" % id_, pytrace=False)
In general my approach works, but I want to improve error reporting so that the fixture, and not the test function itself, indicates the failure of a test: this implementation always prints a . because the test function does not raise any exception, and then an extra E if pytest.fail is called in the last line.
So what I want is to suppress the . triggered by the function under test and let my fixture code output the appropriate character.
Update:
I was able to improve the output, but I still have too many "." characters in the output when the tests are running. The package is uploaded at https://pypi.python.org/pypi/pytest-regtest
you can find the repository at https://sissource.ethz.ch/uweschmitt/pytest-regtest/tree/master
Sorry for posting links, but the files have grown a bit too large to include here.
Solution:
I came up with a solution by implementing a hook wrapper which handles the regtest result. The code is then (simplified):
@pytest.yield_fixture()
def regtest(request):
    fp = cStringIO.StringIO()
    yield fp

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    try:
        outcome = yield
    except Exception:
        raise
    else:
        # we only handle the regtest fixture if no other exception came up during testing:
        if outcome.excinfo is not None:
            return
        regtest = item.funcargs.get("regtest")
        if regtest is not None:
            _handle_regtest_result(regtest)
And _handle_regtest_result either stores the recorded values or does the appropriate checks. The plugin is now available at https://pypi.python.org/pypi/pytest-regtest
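For illustration, a rough sketch of what _handle_regtest_result could look like, reusing the helpers from the first snippet (the real implementation lives in the linked plugin; passing the request around via the fixture object is an assumption made here for brevity):

def _handle_regtest_result(regtest):
    # 'regtest' is the StringIO-like object yielded by the fixture; for this sketch
    # we assume the fixture also attached the pytest 'request' to it.
    reset, full_path, id_ = _setup(regtest.request)
    recorded = regtest.getvalue()
    if reset:
        _record_output(recorded, full_path)   # overwrite the stored "golden" output
    elif _compare_output(recorded, full_path, regtest.request, id_):
        pytest.fail("regression test %s failed" % id_, pytrace=False)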
You are mixing two things there: the fixture itself (setting up conditions for your test) and the expected behavior _compare_output(a, b). You are probably looking for something along these lines:
import pytest

@pytest.fixture()
def file_fixture():
    fp = cStringIO.StringIO()
    return fp.getvalue()

@pytest.fixture()
def request_fixture(request, file_fixture):
    return _setup(request)

def test_regression(request, request_fixture, file_fixture):
    reset, full_path, id_ = request_fixture
    if reset:
        _record_output(file_fixture, full_path)
    else:
        failed = _compare_output(file_fixture, full_path, request, id_)
        assert not failed, "regression test %s failed" % id_
I'm trying to customize pytest's report.html using the pytest-html plugin.
I searched many sites (including the pytest-html documentation) and found that the code below is commonly used. (The code is in conftest.py.)
(https://pytest-html.readthedocs.io/en/latest/user_guide.html#extra-content)
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin("html")
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, "extra", [])
    if report.outcome == "call":
        # always add url to report
        xfail = hasattr(report, "wasxfail")
        if (report.skipped and xfail) or (report.failed and not xfail):
            extra.append(pytest_html.extras.url("http://www.google.com/"))
            extra.append(pytest_html.extras.text('Hi', name='TEXT'))
            # only add additional html on failure
            # extra.append(pytest_html.extras.html("<div>Additional HTML</div>"))
        report.extra = extra
However, I have no idea what each line actually does; none of those sites explains it.
Why does the script assign the bare yield keyword to outcome, without any value (e.g. yield 1), and what does outcome.get_result() actually do?
Also, I have no idea about xfail ("wasxfail").
I found that pytest.xfail makes the test function fail in the pytest run, but I think it has nothing to do with the above code.
Why do we use 'xfail' and not 'fail'?
Anyway, what I need is:
First, the meaning of each line and what it does.
Second, I want to set a different message in report.html depending on pass/fail.
I tried report.outcome == 'failed' and report.outcome == 'passed' to split the conditions, but it didn't work.
Third, when I add a text extra instead of a url, it becomes a link that redirects to a page containing the text.
However, if I click that link in the html, it opens an about:blank page, not the desired one.
Right-clicking and opening in a new tab redirects to the desired one.
Any help is welcomed. Thanks.
+ I have more questions. I tried:
if report.passed:
    extra.append(pytest_html.extras.url("https://www.google.com/"))
    report.extra = extra
It attaches the same link 3 times in report.html (the Results table). How can I handle that?
+ I could log a message when a test fails, e.g. msg = 'hi'; pytest.fail(msg). However, I have no clue how to do the same when a test passes.
Trying to answer as many points as possible.
Pytest uses generators to iterate over the report steps.
The hook pytest_runtest_makereport is called for each value of report.when (not .outcome; this is a bug in the documentation), which according to pytest can be 'collect', 'setup', 'call', or 'teardown'.
outcome = yield followed by outcome.get_result() is how pytest implements its hook wrappers: the yield hands control to the remaining hook implementations, and get_result() returns the TestReport they produced.
The distinction between failed and xfail (expected to fail) comes down to how you define a test failure: the condition treats it as a failure if the test was skipped but was expected to fail, or if it failed but was not expected to fail.
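For illustration, a minimal example of a test whose 'call' report gets the wasxfail attribute, so that the (report.skipped and xfail) branch matches:

import pytest

@pytest.mark.xfail(reason="known rendering bug")
def test_expected_to_fail():
    # the assertion fails, so the report is an expected failure:
    # report.skipped is True and report.wasxfail is set
    assert 1 == 2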
The thing with the about:blank could also be a bug.
What you want to use your if statements on is not the call info but the report:
if report.failed:
    do_stuff()
if report.passed:
    do_stuff_different()
One way to get more info about code and its context would be to debug it using breakpoint().
So the snippet you are looking for is:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin("html")
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, "extra", [])
    if report.when == "call":
        xfail = hasattr(report, "wasxfail")
        if (report.skipped and xfail) or (report.failed and not xfail):
            extra.append(pytest_html.extras.url("http://www.google.com/"))
        if report.passed:
            extra.append(pytest_html.extras.url("http://www.stackoverflow.com/"))
        report.extra = extra
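If the goal is a different message rather than a different link, the same branches can append a text extra instead. A sketch using pytest-html's extras.text (the messages and the "STATUS" name are placeholders):

# inside the `if report.when == "call":` branch of the hook above
if (report.skipped and xfail) or (report.failed and not xfail):
    extra.append(pytest_html.extras.text("test failed", name="STATUS"))
elif report.passed:
    extra.append(pytest_html.extras.text("test passed", name="STATUS"))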
I have a function like below.
# in retrieve_data.py
import logging
import os

def create_output_csv_file_path_and_name(output_folder='outputs') -> str:
    """
    Creates an output folder in the project root if it doesn't already exist.
    Then returns the path and name of the output CSV file, which will be used
    to write the data.
    """
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)
        logging.info(f"New folder created for output file: {output_folder}")
    return os.path.join(output_folder, 'results.csv')
I also created a unit test file like below.
# in test_retrieve_data.py
class OutputCSVFilePathAndNameCreationTest(unittest.TestCase):
    @patch('path.to.retrieve_data.os.path.exists')
    @patch('path.to.retrieve_data.os.makedirs')
    def test_create_output_csv_file_path_and_name_calls_exists_and_makedirs_once_when_output_folder_is_not_created_yet(
        self,
        os_path_exists_mock,
        os_makedirs_mock
    ):
        os_path_exists_mock.return_value = False
        retrieve_cradle_profile_details.create_output_csv_file_path_and_name()
        os_path_exists_mock.assert_called_once()
        os_makedirs_mock.assert_called_once()
But when I run the above unit test, I get the following error.
def assert_called_once(self):
    """assert that the mock was called only once.
    """
    if not self.call_count == 1:
        msg = ("Expected '%s' to have been called once. Called %s times.%s"
               % (self._mock_name or 'mock',
                  self.call_count,
                  self._calls_repr()))
        raise AssertionError(msg)

AssertionError: Expected 'makedirs' to have been called once. Called 0 times.
I tried poking around with pdb.set_trace() in the create_output_csv_file_path_and_name method and I'm sure it is receiving a mocked object for os.path.exists(), but the code never gets past the os.path.exists(output_folder) check (output_folder already exists in the program folder, but I do not use it for unit testing purposes and want to leave it alone). What could I possibly be doing wrong here when mocking os.path.exists() and os.makedirs()? Thank you in advance for your answers!
You have the arguments to your test function reversed. When you have stacked decorators, like:
#patch("retrieve_data.os.path.exists")
#patch("retrieve_data.os.makedirs")
def test_create_output_csv_file_path_...():
They apply bottom to top, so you need to write:
#patch("retrieve_data.os.path.exists")
#patch("retrieve_data.os.makedirs")
def test_create_output_csv_file_path_and_name_calls_exists_and_makedirs_once_when_output_folder_is_not_created_yet(
self, os_makedirs_mock, os_path_exists_mock
):
With this change, if I have this in retrieve_data.py:
import os
import logging

def create_output_csv_file_path_and_name(output_folder='outputs') -> str:
    """
    Creates an output folder in the project root if it doesn't already exist.
    Then returns the path and name of the output CSV file, which will be used
    to write the data.
    """
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)
        logging.info(f"New folder created for output file: {output_folder}")
    return os.path.join(output_folder, 'results.csv')
And this is test_retrieve_data.py:
import unittest
from unittest.mock import patch

import retrieve_data

class OutputCSVFilePathAndNameCreationTest(unittest.TestCase):
    @patch("retrieve_data.os.path.exists")
    @patch("retrieve_data.os.makedirs")
    def test_create_output_csv_file_path_and_name_calls_exists_and_makedirs_once_when_output_folder_is_not_created_yet(
        self, os_makedirs_mock, os_path_exists_mock
    ):
        os_path_exists_mock.return_value = False
        retrieve_data.create_output_csv_file_path_and_name()
        os_path_exists_mock.assert_called_once()
        os_makedirs_mock.assert_called_once()
Then the tests run successfully:
$ python -m unittest -v
test_create_output_csv_file_path_and_name_calls_exists_and_makedirs_once_when_output_folder_is_not_created_yet (test_retrieve_data.OutputCSVFilePathAndNameCreationTest.test_create_output_csv_file_path_and_name_calls_exists_and_makedirs_once_when_output_folder_is_not_created_yet) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
Update: I wanted to leave a comment on the diagnostics I performed here, because I didn't initially spot the reversed arguments either, but the problem became immediately apparent when I added a breakpoint() at the beginning of the test and printed out the values of the mocks:
(Pdb) p os_path_exists_mock
<MagicMock name='makedirs' id='140113966613456'>
(Pdb) p os_makedirs_mock
<MagicMock name='exists' id='140113966621072'>
The fact that the names were swapped made the underlying problem easy to spot.
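As an aside, decorator ordering mix-ups can be sidestepped by applying the patches as context managers inside the test, so each mock is bound explicitly by name. A sketch of the same test in that style (the shorter test name here is my own):

import unittest
from unittest.mock import patch

import retrieve_data

class OutputCSVFilePathAndNameCreationTest(unittest.TestCase):
    def test_create_output_csv_file_path_and_name_creates_missing_folder(self):
        # each mock is named right where it is created, so there is no ordering to get wrong
        with patch("retrieve_data.os.path.exists", return_value=False) as os_path_exists_mock, \
             patch("retrieve_data.os.makedirs") as os_makedirs_mock:
            retrieve_data.create_output_csv_file_path_and_name()
            os_path_exists_mock.assert_called_once()
            os_makedirs_mock.assert_called_once()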
I'd like to run pytest, then store the results and present them to users on demand (e.g. store pytest results in a db and expose them through a web service).
I could run pytest from the command line with an option to save a results report to a file, then find and parse that file, but it feels silly to have the results inside a (pytest) Python app, write them to a file, and then immediately look the file up and parse it back into Python code for further processing. I know I can run pytest programmatically via pytest.main(args); however, it only returns an exit code and no details about the test results. How can I retrieve the results when using pytest.main()?
I'm looking for something like:
args = # arguments
ret_code = pytest.main(args=args) # pytest.main() as is only returns trivial return code
my_own_method_to_process(pytest.results) # how to retrieve any kind of pytest.results object that would contain test execution results data (list of executed tests, pass fail info, etc as pytest is displaying into console or saves into file reports)
There are a couple of similar questions, but always with some deviation that doesn't work for me. I simply want to run pytest from my code and, whatever format the output is, directly grab it and process it further.
(Note: I'm in a corporate environment where installing new packages (i.e. pytest plugins) is restricted, so I'd like to achieve this without installing any other module/pytest plugin into my environment.)
Write a small plugin that collects and stores reports for each test. Example:
import time
import pytest

class ResultsCollector:
    def __init__(self):
        self.reports = []
        self.collected = 0
        self.exitcode = 0
        self.passed = 0
        self.failed = 0
        self.xfailed = 0
        self.skipped = 0
        self.total_duration = 0

    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_makereport(self, item, call):
        outcome = yield
        report = outcome.get_result()
        if report.when == 'call':
            self.reports.append(report)

    def pytest_collection_modifyitems(self, items):
        self.collected = len(items)

    def pytest_terminal_summary(self, terminalreporter, exitstatus):
        self.exitcode = exitstatus.value
        self.passed = len(terminalreporter.stats.get('passed', []))
        self.failed = len(terminalreporter.stats.get('failed', []))
        self.xfailed = len(terminalreporter.stats.get('xfailed', []))
        self.skipped = len(terminalreporter.stats.get('skipped', []))
        self.total_duration = time.time() - terminalreporter._sessionstarttime

def run():
    collector = ResultsCollector()
    pytest.main(plugins=[collector])
    for report in collector.reports:
        print('id:', report.nodeid, 'outcome:', report.outcome)  # etc
    print('exit code:', collector.exitcode)
    print('passed:', collector.passed, 'failed:', collector.failed, 'xfailed:', collector.xfailed, 'skipped:', collector.skipped)
    print('total duration:', collector.total_duration)

if __name__ == '__main__':
    run()
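Since the goal is to store the results in a database and serve them from a web service, here is a follow-up sketch that turns the collected reports into plain dictionaries ready for JSON serialization or a database insert (which fields to keep is just an assumption):

import json

def reports_to_records(collector):
    # each TestReport carries the nodeid, outcome, timing and failure text of one test call
    return [
        {
            'nodeid': report.nodeid,
            'outcome': report.outcome,
            'duration': report.duration,
            'longrepr': str(report.longrepr) if report.longrepr else None,
        }
        for report in collector.reports
    ]

# e.g. serialize before handing off to your storage layer:
# print(json.dumps(reports_to_records(collector), indent=2))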
I'm writing a module that involves parsing html for data and creating an object from it. Basically, I want to create a set of testcases where each case is an html file paired with a golden/expected pickled object file.
As I make changes to the parser, I would like to run this test suite to ensure that each html page is still parsed to equal the 'golden' file (essentially a regression suite).
I can see how to code this as a single test case, where I would load all file pairs from some directory and then iterate through them. But I believe this would end up being reported as a single test case, pass or fail, whereas I want a report that says, for example, 45/47 pages parsed successfully.
How do I arrange this?
I've done similar things with the unittest framework by writing a function which creates and returns a test class. This function can then take in whatever parameters you want and customise the test class accordingly. You can also customise the __doc__ attribute of the test function(s) to get customised messages when running the tests.
I quickly knocked up the following example code to illustrate this. Instead of doing any actual testing, it uses the random module to fail some tests for demonstration purposes. When created, the classes are inserted into the global namespace so that a call to unittest.main() will pick them up. Depending on how you run your tests, you may wish to do something different with the generated classes.
import os
import unittest

# Generate a test class for an individual file.
def make_test(filename):
    class TestClass(unittest.TestCase):
        def test_file(self):
            # Do the actual testing here.
            # parsed = do_my_parsing(filename)
            # golden = load_golden(filename)
            # self.assertEquals(parsed, golden, 'Parsing failed.')

            # Randomly fail some tests.
            import random
            if not random.randint(0, 10):
                self.assertEquals(0, 1, 'Parsing failed.')

        # Set the docstring so we get nice test messages.
        test_file.__doc__ = 'Test parsing of %s' % filename

    return TestClass

# Create a single file test.
Test1 = make_test('file1.html')

# Create several tests from a list.
for i in range(2, 5):
    globals()['Test%d' % i] = make_test('file%d.html' % i)

# Create them from a directory listing.
for dirname, subdirs, filenames in os.walk('tests'):
    for f in filenames:
        globals()['Test%s' % f] = make_test('%s/%s' % (dirname, f))

# If this file is being run, run all the tests.
if __name__ == '__main__':
    unittest.main()
A sample run:
$ python tests.py -v
Test parsing of file1.html ... ok
Test parsing of file2.html ... ok
Test parsing of file3.html ... ok
Test parsing of file4.html ... ok
Test parsing of tests/file5.html ... ok
Test parsing of tests/file6.html ... FAIL
Test parsing of tests/file7.html ... ok
Test parsing of tests/file8.html ... ok
======================================================================
FAIL: Test parsing of tests/file6.html
----------------------------------------------------------------------
Traceback (most recent call last):
File "generic.py", line 16, in test_file
self.assertEquals(0, 1, 'Parsing failed.')
AssertionError: Parsing failed.
----------------------------------------------------------------------
Ran 8 tests in 0.004s
FAILED (failures=1)
The nose testing framework supports this. http://www.somethingaboutorange.com/mrl/projects/nose/
Also see here: How to generate dynamic (parametrized) unit tests in python?
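If pytest is an option, the same one-test-per-file reporting falls out of parametrization. A rough sketch, where parse_html is a stand-in for your parser's entry point and each page is assumed to have a pickled golden object stored next to it as <page>.golden.pickle:

import glob
import pickle

import pytest

HTML_FILES = sorted(glob.glob('tests/pages/*.html'))  # assumed directory layout

@pytest.mark.parametrize('html_path', HTML_FILES)
def test_page_matches_golden(html_path):
    # each html file shows up as its own test in the report
    with open(html_path) as f:
        parsed = parse_html(f.read())  # hypothetical parser entry point
    with open(html_path + '.golden.pickle', 'rb') as f:
        golden = pickle.load(f)
    assert parsed == golden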
Here's what I would do (untested):
import os
import unittest

files = os.listdir("/path/to/dir")

class SomeTests(unittest.TestCase):
    def _compare_files(self, file_name):
        with open('/path/to/dir/%s-golden' % file_name, 'r') as golden:
            with open('/path/to/dir/%s-trial' % file_name, 'r') as trial:
                assert golden.read() == trial.read()

def test_generator(file_name):
    def test(self):
        self._compare_files(file_name)
    return test

if __name__ == '__main__':
    for file_name in files:
        test_name = 'test_%s' % file_name
        test = test_generator(file_name)
        setattr(SomeTests, test_name, test)
    unittest.main()
I'd like to know how I could unit-test the following module.
import os
import urllib2

def download_distribution(url, tempdir):
    """ Method which downloads the distribution from PyPI """
    print "Attempting to download from %s" % (url,)
    try:
        url_handler = urllib2.urlopen(url)
        distribution_contents = url_handler.read()
        url_handler.close()
        filename = get_file_name(url)
        file_handler = open(os.path.join(tempdir, filename), "w")
        file_handler.write(distribution_contents)
        file_handler.close()
        return True
    except (ValueError, IOError):
        return False
Proponents of unit testing will tell you that unit tests should be self-contained, that is, they should not access the network or the filesystem (especially not in writing mode). Network and filesystem tests are beyond the scope of unit tests (though you might cover them with integration tests).
Speaking generally, for such a case, I'd extract the urllib and file-writing code into separate functions (which would not be unit-tested) and inject mock functions during unit testing.
I.e. (slightly abbreviated for better reading):
def get_web_content(url):
    # Extracted code
    url_handler = urllib2.urlopen(url)
    content = url_handler.read()
    url_handler.close()
    return content

def write_to_file(content, filename, tmpdir):
    # Extracted code
    file_handler = open(os.path.join(tmpdir, filename), "w")
    file_handler.write(content)
    file_handler.close()

def download_distribution(url, tempdir):
    # Original code, after extractions
    distribution_contents = get_web_content(url)
    filename = get_file_name(url)
    write_to_file(distribution_contents, filename, tempdir)
    return True
And, on the test file:
import unittest

import module_I_want_to_test

def mock_web_content(url):
    return """Some fake content, useful for testing"""

def mock_write_to_file(content, filename, tmpdir):
    # In this case, do nothing, as we don't do filesystem meddling while unit testing
    pass

module_I_want_to_test.get_web_content = mock_web_content
module_I_want_to_test.write_to_file = mock_write_to_file

class SomeTests(unittest.TestCase):
    # And so on...
And then I second Daniel's suggestion: you should read some more in-depth material on unit testing.
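As a side note, on Python 3 the same injection can be done without permanently reassigning module attributes by using unittest.mock.patch, which restores the real functions after each test. A sketch under the same module layout as above (the test name and canned return value are made up):

import unittest
from unittest.mock import patch

import module_I_want_to_test

class DownloadDistributionTests(unittest.TestCase):
    @patch("module_I_want_to_test.write_to_file")
    @patch("module_I_want_to_test.get_web_content", return_value="Some fake content")
    def test_download_distribution_returns_true(self, get_web_content_mock, write_to_file_mock):
        # neither the network nor the filesystem is touched: both helpers are replaced
        self.assertTrue(module_I_want_to_test.download_distribution("http://example.com/foo.tar.gz", "/tmp"))
        write_to_file_mock.assert_called_once()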
Vague question. If you're just looking for a primer for unit testing in general with a Python slant, I recommend Mark Pilgrim's "Dive Into Python" which has a chapter on unit testing with Python. Otherwise you need to clear up what specific issues you are having testing that code.
To mock urlopen you can pre-fetch some example responses that you can then use in your unit tests. Here's an example to get you started:
def urlopen(url):
    urlclean = url.split('?')[0]  # ignore GET parameters
    files = {
        'http://example.com/foo.xml': 'foo.xml',
        'http://example.com/bar.xml': 'bar.xml',
    }
    return file(files[urlclean])

yourmodule.urllib2.urlopen = urlopen