Change the name of py.test tests - python

I'm interested in migrating my test suite from using nose to using py.test. I don't like the test names printed by py.test in the test failure summary, because they don't include the file name of the test. I do really like the test names that py.test uses to print progress in verbose mode.
For example:
[dbw tests]$ py.test -sv r_group/test_meta_analysis/test_meta_analysis.py::_TestPanel::testDisplaySummary
==================================== test session starts ===============================
collected 3 items
r_group/test_meta_analysis/test_meta_analysis.py::_TestPanel::testDisplaySummary FAILED
========================================= FAILURES =====================================
_______________________________ _TestPanel.testDisplaySummary __________________________
This is pretty important to me because I have ~40K tests, and the short name "_TestPanel.testDisplaySummary" is not helpful to me in quickly finding the test I want. I assume that there is a built-in py.test hook that will do this, but I haven't found it yet.

The method summary_failures() of _pytest.terminal.TerminalReporter is the one that prints that line (I found it by searching for the string "FAILURES"). It uses _getfailureheadline() to get the test name (and to discard the file name and line number).
I'd suggest subclassing TerminalReporter and overriding _getfailureheadline().
For example:
def _getfailureheadline(self, rep):
    if hasattr(rep, 'location'):
        fspath, lineno, domain = rep.location
        return '::'.join((fspath, domain))
    else:
        return super()._getfailureheadline(rep)
Produces the following output:
test.py::test_x FAILED
================ FAILURES ================
____________ test.py::test_x _____________
You can override the default reporter with your own by writing a new plugin, for example in a conftest.py:
import sys
from _pytest.terminal import TerminalReporter

class MyOwnReporter(TerminalReporter):
    ...

def pytest_configure(config):
    # swap out the built-in terminal reporter for the subclass
    standard_reporter = config.pluginmanager.getplugin('terminalreporter')
    config.pluginmanager.unregister(standard_reporter)
    reporter = MyOwnReporter(config, sys.stdout)
    config.pluginmanager.register(reporter, 'terminalreporter')

Pytest capture stdout of a certain test

Is there a way to get the Captured stdout call just for a specific test without failing the test?
So let's say I have 10 tests, plus a test_summary. test_summary really just prints some kind of summary/statistics of the tests, but in order for me to get that output/printout, I currently have to fail that test intentionally. Of course this test_summary runs last using pytest-ordering. But is there a better way to get those results without failing the test? Or should it not be in a test, but rather in conftest.py or something? Please advise on the best practice and how I can get this summary/results (basically a printout from a specific script I wrote).
First, to answer your exact question:
Is there a way to get the Captured stdout call just for a specific test without failing the test?
You can add a custom section that mimics Captured stdout call and is printed on test success. The output captured in each test is stored in the related TestReport object and is accessed via report.capstdout. Example implementation: add the following code to a conftest.py in your project or tests root directory:
import os

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # on failures, don't add "Captured stdout call" as pytest does that already;
    # otherwise, the section "Captured stdout call" would be added twice
    if exitstatus > 0:
        return
    # get all reports
    reports = terminalreporter.getreports('')
    # combine captured stdout of reports for tests named `<smth>::test_summary`
    content = os.linesep.join(
        report.capstdout for report in reports
        if report.capstdout and report.nodeid.endswith("test_summary")
    )
    # add custom section that mimics pytest's one
    if content:
        terminalreporter.ensure_newline()
        terminalreporter.section(
            'Captured stdout call',
            sep='-',
            blue=True,
            bold=True,
        )
        terminalreporter.line(content)
This will add a custom section Captured stdout call that will only print the output captured for the test whose ID ends with test_summary (if you have multiple test functions named test_summary, extend the check; see the sketch below). To distinguish the two sections, the custom one has a blue header; if you want it to match the original, remove the blue=True argument.
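If you end up with more than one summary-style test, a hedged variant of the check might look like this (the second test name is hypothetical):
# names of tests whose captured stdout should be echoed in the custom section
SUMMARY_TEST_NAMES = ("test_summary", "test_statistics")  # second name is hypothetical

def is_summary_report(report):
    # compare against the last component of the node ID,
    # e.g. "test_spam.py::test_summary" -> "test_summary"
    return report.capstdout and report.nodeid.split("::")[-1] in SUMMARY_TEST_NAMES
You would then use is_summary_report(report) in place of the endswith condition in the generator expression above.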
Now, to address your actual problem:
test_summary really just prints some kind of summary/statistics of the tests
Using a test for custom reporting smells a lot like a workaround to me; why not collect the data in the tests and add a custom section printing that data afterwards? To collect the data, you can e.g. use the record_property fixture:
def test_foo(record_property):
    # records a key-value pair
    record_property("hello", "world")

def test_bar(record_property):
    record_property("spam", "eggs")
To collect and output the custom properties recorded, slightly alter the above hookimpl. The data stored via record_property is accessible via report.user_properties:
import os

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    reports = terminalreporter.getreports('')
    content = os.linesep.join(
        f'{key}: {value}' for report in reports
        for key, value in report.user_properties
    )
    if content:
        terminalreporter.ensure_newline()
        terminalreporter.section(
            'My custom summary',
            sep='-',
            blue=True,
            bold=True
        )
        terminalreporter.line(content)
Running the above tests now yields:
$ pytest test_spam.py
=============================== test session starts ================================
platform linux -- Python 3.9.0, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/oleg.hoefling/projects/private/stackoverflow/so-64812992
plugins: metadata-1.10.0, json-report-1.2.4, cov-2.10.1, forked-1.3.0, xdist-2.1.0
collected 2 items
test_spam.py .. [100%]
-------------------------------- My custom summary ---------------------------------
hello: world
spam: eggs
================================ 2 passed in 0.01s =================================
You can use the standard pytest terminal reporter.
def test_a(request):
    reporter = request.config.pluginmanager.getplugin("terminalreporter")
    reporter.write_line("Hello", yellow=True)
    reporter.write_line("World", red=True)
The reporter is a standard plugin available in every pytest version I can recall. Unfortunately it is not documented, although the API is pretty stable.
You can find TerminalReporter class on pytest's github: https://github.com/pytest-dev/pytest/blob/master/src/_pytest/terminal.py
The methods most useful to you are probably ensure_newline(), write(), flush() and the absolute champion, write_line(), which just does all the work.
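As a rough sketch of those lower-level methods, building on the test_a example above (how the output interleaves with pytest's capturing depends on your settings, so treat this as illustrative only):
def test_b(request):
    reporter = request.config.pluginmanager.getplugin("terminalreporter")
    reporter.ensure_newline()                # start on a fresh line
    reporter.write("progress: ", bold=True)  # write without a trailing newline
    reporter.write("50%", green=True)
    reporter.flush()                         # push the partial line to the terminal
    reporter.write_line(" done", cyan=True)  # finish the line in one call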
Available styles could be found at https://github.com/pytest-dev/pytest/blob/master/src/_pytest/_io/terminalwriter.py
_esctable = dict(
    black=30,
    red=31,
    green=32,
    yellow=33,
    blue=34,
    purple=35,
    cyan=36,
    white=37,
    Black=40,
    Red=41,
    Green=42,
    Yellow=43,
    Blue=44,
    Purple=45,
    Cyan=46,
    White=47,
    bold=1,
    light=2,
    blink=5,
    invert=7,
)
Lowercased styles are for the foreground, capitalized are for background.
For example, yellow text on a green background would be printed as
reporter.write_line("Text", yellow=True, Green=True)
I hope this helps.

Python - Using pytest to skip test unless specified

Background
I am using pytest to test a web scraper that pushes the data to a database. The class only pulls the html and pushes the html to a database to be parsed later. Most of my tests use dummy data to represent the html.
Question
I want to run a test where a webpage from the website is scraped, but I want the test to be automatically turned off unless specified. A similar scenario could be an expensive or time-consuming test that you do not want to run every time.
Expected Solution
I am expecting some kind of marker that suppresses a test unless I tell pytest to run all suppressed tests, but I do not see that in the documentation.
What I have done
I am currently using the skip marker and commenting it out when I want the test to run.
I tried to use the skipif marker and to pass arguments to the Python script by running pytest test_file.py 1 from the command prompt, with the code below in the test file. The problem is that when I try to provide an argument to my test file, pytest expects it to be another file name, so I get the error "no tests run in 0.00 seconds, ERROR: file not found: 1".
if len(sys.argv) == 1:
    RUN_ALL_TESTS = False
else:
    RUN_ALL_TESTS = True

...
# other tests
...

@pytest.mark.skipif(RUN_ALL_TESTS)
def test_scrape_website():
    ...
I might be able to treat the test as a fixture and use @pytest.fixture(autouse=False), though I am not sure how to override the autouse variable.
A similar solution was stated in How to skip a pytest using an external fixture? but this solution seems more complicated than what I need.
The docs describe exactly your problem: https://docs.pytest.org/en/latest/example/simple.html#control-skipping-of-tests-according-to-command-line-option. Copying from there:
Here is a conftest.py file adding a --runslow command line option to
control skipping of pytest.mark.slow marked tests:
# content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now write a test module like this:
# content of test_module.py
import pytest

def test_func_fast():
    pass

@pytest.mark.slow
def test_func_slow():
    pass
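With this conftest.py in place, a plain pytest run should report test_func_slow as skipped with the reason "need --runslow option to run" (visible with -rs or -v), while pytest --runslow runs both tests.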
There are a couple of ways to handle this, but I'll go over two common approaches I've seen in Python baselines.
1) Separate your tests by putting the "optional" tests in another directory.
Not sure what your project layout looks like, but you can do something like this (only the test directory is important, the rest is just a toy example layout):
README.md
setup.py
requirements.txt
test/
    unit/
        test_something.py
        test_something_else.py
    integration/
        test_optional.py
application/
    __init__.py
    some_module.py
Then, when you invoke pytest, you invoke it by doing pytest test/unit if you want to run just the unit tests (i.e. only test_something*.py files), or pytest test/integration if you want to run just the integration tests (i.e. only test_optional.py), or pytest test if you want to run all the tests. So, by default, you can just run pytest test/unit.
I recommend wrapping these calls in some sort of script. I prefer make since it is powerful for this type of wrapping. Then you can say make test and it just runs your default (fast) test suite, or make test_all, and it'll run all the tests (which may or may not be slow).
Example Makefile you could wrap with:
.PHONY: all clean install test test_int test_all uninstall

all: install

clean:
	rm -rf build
	rm -rf dist
	rm -rf *.egg-info

install:
	python setup.py install

test: install
	pytest -v -s test/unit

test_int: install
	pytest -v -s test/integration

test_all: install
	pytest -v -s test

uninstall:
	pip uninstall app_name
2) Mark your tests judiciously with the @pytest.mark.skipif decorator, but use an environment variable as the trigger
I don't like this solution as much; it feels a bit haphazard to me (it's hard to tell which set of tests is being run on any given pytest run). However, what you can do is define an environment variable and then rope that environment variable into the module to detect whether you want to run all your tests. Environment variables are shell dependent, but I'll pretend you have a bash environment since that's a popular shell.
You could do export TEST_LEVEL="unit" for just fast unit tests (so this would be your default), or export TEST_LEVEL="all" for all your tests. Then in your test files, you can do what you were originally trying to do like this:
import os
import pytest

...

# skipif with a boolean condition needs a reason; TEST_LEVEL defaults to "unit" if unset
@pytest.mark.skipif(os.environ.get("TEST_LEVEL", "unit") == "unit",
                    reason="only runs when TEST_LEVEL is set beyond 'unit'")
def test_scrape_website():
    ...
Note: Naming the test levels "unit" and "integration" is irrelevant. You can name them whatever you want. You can also have many many levels (like maybe nightly tests or performance tests).
Also, I think option 1 is the best way to go, since it not only clearly separates the tests, but also adds semantics and clarity to what the tests mean and represent. But there is no "one size fits all" in software; you'll have to decide which approach you like based on your particular circumstances.
HTH!
A very simple solution is to use the -k argument. You can use the -k parameter to deselect certain tests. -k tries to match its argument to any part of the test's name or markers. You can invert the match by using not (you can also use the boolean operators and and or). Thus -k 'not slow' deselects tests which have "slow" in the name, have a marker with "slow" in the name, or whose class/module name contains "slow".
For example, given this file:
import pytest

def test_true():
    assert True

@pytest.mark.slow
def test_long():
    assert False

def test_slow():
    assert False
When you run:
pytest -k 'not slow'
It outputs something like this (note that both failing tests were deselected as they matched the filter):
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
rootdir: c:\Users\User\Documents\python, inifile:
collected 3 items
test_thing.py . [100%]
============================= 2 tests deselected ==============================
=================== 1 passed, 2 deselected in 0.02 seconds ====================
Because of the eager matching you might want to do something like putting all your unit tests in a directory called unittest and then marking the slow ones as slow_unittest (so as to not accidentally match a test that just so happens to have slow in the name). You could then use -k 'unittest and not slow_unittest' to match all your quick unit tests.
More pytest example marker usage
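As a hedged sketch of that naming scheme (slow_unittest is just a convention here, not something pytest defines; recent pytest versions also warn about unknown marks unless you register them in your configuration):
# test/unittest/test_scraper.py (hypothetical layout)
import pytest

def test_parse_html():
    # quick unit test; not marked, so it survives -k 'not slow_unittest'
    assert True

@pytest.mark.slow_unittest
def test_scrape_live_site():
    # expensive test, deselected by -k 'not slow_unittest'
    assert True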
Here is a little class for reusing @xverges' code with multiple marks/cli options:
from dataclasses import dataclass

import pytest

@dataclass
class TestsWithMarkSkipper:
    ''' Util to skip tests with mark, unless cli option provided. '''
    test_mark: str
    cli_option_name: str
    cli_option_help: str

    def pytest_addoption_hook(self, parser):
        parser.addoption(
            self.cli_option_name,
            action="store_true",
            default=False,
            help=self.cli_option_help,
        )

    def pytest_collection_modifyitems_hook(self, config, items):
        if not config.getoption(self.cli_option_name):
            self._skip_items_with_mark(items)

    def _skip_items_with_mark(self, items):
        reason = "need {} option to run".format(self.cli_option_name)
        skip_marker = pytest.mark.skip(reason=reason)
        for item in items:
            if self.test_mark in item.keywords:
                item.add_marker(skip_marker)
Usage example (must be put in conftest.py):
slow_skipper = TestsWithMarkSkipper(
    test_mark='slow',
    cli_option_name="--runslow",
    cli_option_help="run slow tests",
)

pytest_addoption = slow_skipper.pytest_addoption_hook
pytest_collection_modifyitems = slow_skipper.pytest_collection_modifyitems_hook

Count subtests in Python unittests separately

Since version 3.4, Python supports a simple subtest syntax when writing unittests. A simple example could look like this:
import unittest

class NumbersTest(unittest.TestCase):

    def test_successful(self):
        """A test with subtests that will all succeed."""
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i, i)

if __name__ == '__main__':
    unittest.main()
When running the tests, the output will be
python3 test_foo.py --verbose
test_successful (__main__.NumbersTest)
A test with subtests that will all succeed. ... ok
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
However, in my real world use cases, the subtests will depend on a more complex iterable and check something which is very different for each subtest. Consequently, I would rather have each subtest counted and listed as a separate test case in the output (Ran 6 tests in ... in this example) to get the full picture.
Is this somehow possible with the plain unittest module in Python? The nose test generator feature would output each test separately but I would like to stay compatible with the standard library if possible.
You could subclass unittest.TestResult:
class NumbersTestResult(unittest.TestResult):
    def addSubTest(self, test, subtest, outcome):
        # handle failures calling base class
        super(NumbersTestResult, self).addSubTest(test, subtest, outcome)
        # add to total number of tests run
        self.testsRun += 1
Then in NumbersTest override the run function:
    def run(self, test_result=None):
        return super(NumbersTest, self).run(NumbersTestResult())
Sorry I cannot test this in a fully working environment right now, but this should do the trick.
Using python 3.5.2, themiurge's answer didn't work out-of-the-box for me but a little tweaking got it to do what I wanted.
I had to specifically get the test runner to use this new class as follows:
if __name__ == '__main__':
    unittest.main(testRunner=unittest.TextTestRunner(resultclass=NumbersTestResult))
However, this didn't print the details of the test failures to the console as in the default case. To restore this behaviour I had to change the class that NumbersTestResult inherits from to unittest.TextTestResult:
class NumbersTestResult(unittest.TextTestResult):
    def addSubTest(self, test, subtest, outcome):
        # handle failures calling base class
        super(NumbersTestResult, self).addSubTest(test, subtest, outcome)
        # add to total number of tests run
        self.testsRun += 1
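For reference, a self-contained version combining both answers might look like the sketch below. Note that the enclosing test method itself is still counted once by the default startTest, so the reported total includes it in addition to the subtests:
import unittest

class SubTestCountingResult(unittest.TextTestResult):
    def addSubTest(self, test, subtest, outcome):
        # let the base class record and report subtest failures
        super().addSubTest(test, subtest, outcome)
        # count the subtest towards the total number of tests run
        self.testsRun += 1

class NumbersTest(unittest.TestCase):
    def test_successful(self):
        """A test with subtests that will all succeed."""
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i, i)

if __name__ == '__main__':
    unittest.main(testRunner=unittest.TextTestRunner(resultclass=SubTestCountingResult))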

What is the correct way to report an error in a Python unittest in the setUp method?

I've read some conflicting advice on the use of assert in the setUp method of a Python unit test. I can't see the harm in failing a test if a precondition that test relies on fails.
For example:
import unittest

class MyProcessor():
    """
    This is the class under test
    """
    def __init__(self):
        pass

    def ProcessData(self, content):
        return ['some', 'processed', 'data', 'from', 'content']  # Imagine this could actually pass

class Test_test2(unittest.TestCase):

    def LoadContentFromTestFile(self):
        return None  # Imagine this is actually doing something that could pass.

    def setUp(self):
        self.content = self.LoadContentFromTestFile()
        self.assertIsNotNone(self.content, "Failed to load test data")
        self.processor = MyProcessor()

    def test_ProcessData(self):
        results = self.processor.ProcessData(self.content)
        self.assertGreater(results, 0, "No results returned")

if __name__ == '__main__':
    unittest.main()
This seems like a reasonable thing to do to me, i.e. make sure the test is able to run. When this fails because of the setup condition we get:
F
======================================================================
FAIL: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Projects\Experiments\test2.py", line 21, in setUp
self.assertIsNotNone(self.content, "Failed to load test data")
AssertionError: unexpectedly None : Failed to load test data
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
The purpose of setUp is to reduce the boilerplate code that is shared between the tests in the test class during the Arrange phase.
In the Arrange phase you set up everything needed for running the tested code: any initialization of dependencies, mocks and data needed for the test to run.
Based on the above paragraphs you should not assert anything in your setUp method.
So, as mentioned earlier: if you can't create the test precondition, then your test is broken. To avoid situations like this, Roy Osherove wrote a great book called The Art of Unit Testing (for full disclosure, Lior Friedman (Roy's boss) is a friend of mine and I worked closely with them for more than 2 years, so I am a little bit biased...)
Basically, there are only a few reasons to have an interaction with external resources during the Arrange phase (or with things which may cause an exception), and most of them (if not all) are related to integration tests.
Back to your example: there is a pattern for structuring tests where you need to load an external resource (for all/most of them). Just a side note: before you decide to apply this pattern, make sure that you can't keep this content as a static resource in your test class; if other test classes need to use this resource, extract it into a module.
The following pattern decreases the possibility of failure, since you make fewer calls to the external resource:
class TestClass(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # since external resources such as other servers can provide bad content,
        # you can verify here that the content is valid
        # and prevent the tests from running;
        # however, in most cases you shouldn't.
        cls.externalResourceContent = loadContentFromExternalResource()

    def setUp(self):
        self.content = self.copyContentForTest()
Pros:
fewer chances of failure
prevents inconsistent behavior (1. someone/something has edited the external resource; 2. you failed to load the external resource in some of your tests)
faster execution
Cons:
the code is more complex
setUp is not for asserting preconditions but creating them. If your test is unable to create the necessary fixture, it is broken, not failing.
From the Python Standard Library Documentation:
"If the setUp() method raises an exception while the test is running,
the framework will consider the test to have suffered an error, and
the runTest() method will not be executed. If setUp() succeeded, the
tearDown() method will be run whether runTest() succeeded or not. Such
a working environment for the testing code is called a fixture."
An assertion exception in the setUp() method would be considered as an error by the unittest framework. The test will not be executed.
There isn't a right or wrong answer here; it depends on what you are testing and how expensive setting up your tests is. Some tests are too dangerous to allow attempted runs if the data isn't as expected; some need to work with that data.
You can use assertions in setUp if you need to check for particular conditions between tests; this can help reduce repeated code in your tests.
However, it also makes moving test methods between classes or files a bit trickier, as they will rely on having the equivalent setUp. It can also push the limits of complexity for less code-savvy testers.
It is a bit cleaner to have a test that checks these startup conditions individually and to run it first; they might not be needed between each test. If you define it as test_01_check_preconditions it will run before any of the other test methods, even if the rest are in random order.
You can also then use unittest2.skip decorators for certain conditions.
A better approach is to use addCleanup to ensure that state is reset. The advantage here is that the cleanup still runs even if the test fails, and you can make the cleanup aware of the specific situation, since you define it in the context of your test method.
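For example, a minimal sketch of that idea (the record helpers are hypothetical placeholders):
import unittest

def create_database_records():
    # hypothetical helper that sets up some external state
    return ["record-1", "record-2"]

def delete_database_records(records):
    # hypothetical helper that removes that state again
    records.clear()

class MyTest(unittest.TestCase):
    def setUp(self):
        self.records = create_database_records()
        # cleanups run even if a later part of setUp or the test itself fails
        self.addCleanup(delete_database_records, self.records)

    def test_something(self):
        self.assertEqual(len(self.records), 2)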
There is also nothing to stop you from defining methods that do common checks in the unittest class and calling them in setUp or in test methods; this can help keep complexity enclosed in defined and managed areas.
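As a sketch of that approach (the check and the dummy content are illustrative only):
import unittest

class ScraperTest(unittest.TestCase):
    def check_content(self, content):
        # shared precondition check, callable from setUp or from individual tests
        self.assertIsNotNone(content, "Failed to load test data")
        self.assertGreater(len(content), 0, "Test data is empty")

    def setUp(self):
        self.content = "<html>dummy</html>"  # placeholder for real loading
        self.check_content(self.content)

    def test_contains_html_tag(self):
        self.check_content(self.content)
        self.assertIn("html", self.content)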
Also, don't be tempted to subclass unittest2 beyond a simple test definition; I've seen people try to do that to make tests simple and actually introduce totally unexpected behaviour.
I guess the real take-home is: if you do it, know why you want to use it and make sure you document your reasons; then it is probably OK. If you are unsure, go for the simplest, easiest-to-understand option, because tests are useless if they are not easy to understand.
There is one reason why you want to avoid assertions in a setUp().
If setUp fails, your tearDown will not be executed.
If you set up a set of database records, for instance, and your tearDown deletes these records, then these records will not be deleted.
With this snippet:
import unittest
class Test_test2(unittest.TestCase):
def setUp(self):
print 'setup'
assert False
def test_ProcessData(self):
print 'testing'
def tearDown(self):
print 'teardown'
if __name__ == '__main__':
unittest.main()
Only the setUp() is run:
$ python t.py
setup
E
======================================================================
ERROR: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "t.py", line 7, in setUp
assert False
AssertionError
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)

py.test reporting on skipped tests

I am using py.test -rfs to get additional reports on failed and skipped tests.
There are two ways by which a test gets skipped:
by calling pytest.skip(msg) from inside of a test case
by decorating a test case: @pytest.mark.skipif(condition, msg)
In the final report, if I use pytest.skip() I get lines in the following form:
path/testmodule:linenumber: message
The name of the test case (which is not there) would be great, but even this way it's certainly good enough.
But when I use pytest.mark.skipif(), what I get is this:
SKIP [1] /usr/lib/python2.7/dist-packages/_pytest/skipping.py:132: message
Not even the test module name. The path, which always points to the same line inside py.test itself, is not really helpful.
I certainly could put the name of the test case into the message, but if there is a more elegant way of getting a better report, that would be great. Does anyone know of one?
Instead of -rfs, you should use -v:
$ py.test -v t.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.8 -- py-1.4.23 -- pytest-2.6.0 -- /usr/bin/python
collected 2 items
t.py#3::test_1 SKIPPED
t.py#7::test_2 SKIPPED
========================== 2 skipped in 0.01 seconds ==========================
The above output comes from the following file:
import pytest

def test_1():
    pytest.skip('msg')

@pytest.mark.skipif(True, reason='msg')
def test_2():
    pass
