py.test reporting on skipped tests - python

I am using py.test -rfs to get additional reports on failed and skipped tests.
There are two ways by which a test gets skipped:
by calling pytest.skip(msg) from inside a test case
by decorating a test case: @pytest.mark.skipif(condition, msg)
In the final report, if I use pytest.skip() I get lines in the following form:
path/testmodule:linenumber: message
Having the name of the test case would be great (it is not there), but this is certainly good enough.
But when I use pytest.mark.skipif(), what I get is this:
SKIP [1] /usr/lib/python2.7/dist-packages/_pytest/skipping.py:132: message
Not even the test module name. The path always points to the same line inside py.test itself, which is not really helpful.
I could certainly put the name of the test case into the message, but if there is a more elegant way to improve the report, that would be great. Does anyone know of one?

Instead of -rfs, you should use -v:
$ py.test -v t.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.8 -- py-1.4.23 -- pytest-2.6.0 -- /usr/bin/python
collected 2 items
t.py#3::test_1 SKIPPED
t.py#7::test_2 SKIPPED
========================== 2 skipped in 0.01 seconds ==========================
The above output comes from the following file:
import pytest

def test_1():
    pytest.skip('msg')

@pytest.mark.skipif(True, reason='msg')
def test_2():
    pass


How to access captured stdout/stderr in pytest outside of test function, during collection?

I have the following particular question about pytest. During collection, stdout is filled with a lot of useless, maybe sensitive information (in the following example abstracted by print(1)). Therefore, I don't want this information to show up in pytest's "Captured stdout" output, in case an error happens during collection (in the following example a ZeroDivisionError). However, there is also useful information in stdout (in the following example abstracted by print(2)). So my idea was to just access stdout in between.
Inside a test function, I can use the capsys fixture to access the captured stdout as described here in the pytest docs. What can I use in my case outside a test function, during collection?
Example test.py, even without any actual test functions:
print(1)
# access stdout here
print(2)
1/0 # arbitrary error
Call to pytest:
$ python3 -m pytest test.py
============================= test session starts ==============================
platform linux -- Python 3.6.7, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: /home/user/Documents
collected 0 items / 1 errors
==================================== ERRORS ====================================
___________________________ ERROR collecting test.py ___________________________
test.py:4: in <module>
1/0
E ZeroDivisionError: division by zero
------------------------------- Captured stdout --------------------------------
1
2
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.07 seconds ============================
As you can see, both 1 and 2 are captured. I want to clear stdout in between, so only 2 will show up. I mean, the captured stdout has to be stored by pytest somewhere, right? I just don't know how to access it, to clear it.
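One possible direction (a rough sketch, not a verified answer, assuming pytest 4.x/5.x internals where the capture plugin is registered as "capturemanager"): the captured output lives in pytest's CaptureManager plugin, and reading the global capture also drains it. A conftest.py hook can reach it, though a hook can only drain whatever has been captured up to the point where it runs, not at an arbitrary line inside the module being imported:
# content of conftest.py -- sketch only
def pytest_collectstart(collector):
    # the plugin name "capturemanager" is a pytest-internal detail (assumption)
    capman = collector.config.pluginmanager.get_plugin("capturemanager")
    if capman is not None:
        # read_global_capture() returns everything captured so far and
        # empties the buffers, discarding the early unwanted output
        capman.read_global_capture()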

Python - Using pytest to skip test unless specified

Background
I am using pytest to test a web scraper that pushes data to a database. The class only pulls the html and pushes it to a database to be parsed later. Most of my tests use dummy data to represent the html.
Question
I want a test where a webpage from the website is actually scraped, but I want that test to be automatically turned off unless specified. A similar scenario would be an expensive or time-consuming test that you do not always want to run.
Expected Solution
I am expecting some kind of marker that suppresses a test unless I tell pytest to run all suppressed tests, but I do not see that in the documentation.
What I have done
I am currently using the skip marker and commenting it out when I want the test to run.
I also tried using the skipif marker and passing an argument to the script with the command pytest test_file.py 1, together with the code below in the test file. The problem is that when I provide an argument to my test file, pytest expects it to be another file name, so I get the error "no tests run in 0.00 seconds, ERROR: file not found: 1".
if len(sys.argv) == 1:
    RUN_ALL_TESTS = False
else:
    RUN_ALL_TESTS = True

...
# other tests
...

@pytest.mark.skipif(RUN_ALL_TESTS)
def test_scrape_website():
    ...
I might be able to treat the test as a fixture and use @pytest.fixture(autouse=False), but I am not sure how to override the autouse variable.
A similar solution was stated in How to skip a pytest using an external fixture? but this solutions seems more complicated than what I need.
The docs describe exactly your problem: https://docs.pytest.org/en/latest/example/simple.html#control-skipping-of-tests-according-to-command-line-option. Copying from there:
Here is a conftest.py file adding a --runslow command line option to
control skipping of pytest.mark.slow marked tests:
# content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now write a test module like this:
# content of test_module.py
import pytest

def test_func_fast():
    pass

@pytest.mark.slow
def test_func_slow():
    pass
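With that conftest.py and test module in place, a plain pytest run reports test_func_slow as skipped (reason "need --runslow option to run"), while pytest --runslow runs both tests.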
There are a couple of ways to handle this; I'll go over two common approaches I've seen in Python baselines.
1) Separate your tests by putting the "optional" tests in another directory.
Not sure what your project layout looks like, but you can do something like this (only the test directory is important, the rest is just a toy example layout):
README.md
setup.py
requirements.txt
test/
    unit/
        test_something.py
        test_something_else.py
    integration/
        test_optional.py
application/
    __init__.py
    some_module.py
Then, when you invoke pytest, you invoke it by doing pytest test/unit if you want to run just the unit tests (i.e. only test_something*.py files), or pytest test/integration if you want to run just the integration tests (i.e. only test_optional.py), or pytest test if you want to run all the tests. So, by default, you can just run pytest test/unit.
I recommend wrapping these calls in some sort of script. I prefer make since it is powerful for this type of wrapping. Then you can say make test and it just runs your default (fast) test suite, or make test_all, and it'll run all the tests (which may or may not be slow).
Example Makefile you could wrap with:
.PHONY: all clean install test test_int test_all uninstall

all: install

clean:
	rm -rf build
	rm -rf dist
	rm -rf *.egg-info

install:
	python setup.py install

test: install
	pytest -v -s test/unit

test_int: install
	pytest -v -s test/integration

test_all: install
	pytest -v -s test

uninstall:
	pip uninstall app_name
2) Mark your tests judiciously with the @pytest.mark.skipif decorator, but use an environment variable as the trigger
I don't like this solution as much; it feels a bit haphazard to me (it's hard to tell which set of tests is being run on any given pytest run). However, you can define an environment variable and then rope that environment variable into the module to detect whether you want to run all your tests. Environment variables are shell dependent, but I'll assume you have a bash environment, since that's a popular shell.
You could do export TEST_LEVEL="unit" for just fast unit tests (so this would be your default), or export TEST_LEVEL="all" for all your tests. Then in your test files, you can do what you were originally trying to do like this:
import os
...

@pytest.mark.skipif(os.environ["TEST_LEVEL"] == "unit")
def test_scrape_website():
    ...
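A small variation on that snippet (my own adjustment, not part of the original answer): os.environ.get avoids a KeyError when TEST_LEVEL is not exported, and newer pytest versions insist on a reason= when skipif is given a plain boolean condition:
import os

import pytest

# Hypothetical variant: default to "unit" when TEST_LEVEL is unset, and
# supply the reason= argument that recent pytest releases require for
# boolean skipif conditions.
@pytest.mark.skipif(
    os.environ.get("TEST_LEVEL", "unit") == "unit",
    reason="integration tests only run when TEST_LEVEL is not 'unit'",
)
def test_scrape_website():
    ...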
Note: Naming the test levels "unit" and "integration" is irrelevant; you can name them whatever you want. You can also have many levels (for example, nightly tests or performance tests).
Also, I think option 1 is the best way to go, since it not only cleanly separates the tests but also adds semantics and clarity to what the tests mean and represent. But there is no "one size fits all" in software; you'll have to decide which approach you like based on your particular circumstances.
HTH!
A very simple solution is to use the -k argument. You can use the -k parameter to deselect certain tests. -k tries to match its argument to any part of the test's name or markers. You can invert the match by using not (you can also use the boolean operators and and or). Thus -k 'not slow' deselects tests which have "slow" in the name, have a marker with "slow" in the name, or whose class/module name contains "slow".
For example, given this file:
import pytest

def test_true():
    assert True

@pytest.mark.slow
def test_long():
    assert False

def test_slow():
    assert False
When you run:
pytest -k 'not slow'
It outputs something like this (note that both failing tests were deselected, as they matched the filter):
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
rootdir: c:\Users\User\Documents\python, inifile:
collected 3 items
test_thing.py . [100%]
============================= 2 tests deselected ==============================
=================== 1 passed, 2 deselected in 0.02 seconds ====================
Because of the eager matching you might want to do something like putting all your unit tests in a directory called unittest and then marking the slow ones as slow_unittest (so as not to accidentally match a test that just happens to have slow in the name). You could then use -k 'unittest and not slow_unittest' to match all your quick unit tests.
More pytest example marker usage
Here is a little class to reuse @xverges's code for multiple marks/CLI options:
from dataclasses import dataclass

import pytest

@dataclass
class TestsWithMarkSkipper:
    """Util to skip tests with mark, unless cli option provided."""
    test_mark: str
    cli_option_name: str
    cli_option_help: str

    def pytest_addoption_hook(self, parser):
        parser.addoption(
            self.cli_option_name,
            action="store_true",
            default=False,
            help=self.cli_option_help,
        )

    def pytest_collection_modifyitems_hook(self, config, items):
        if not config.getoption(self.cli_option_name):
            self._skip_items_with_mark(items)

    def _skip_items_with_mark(self, items):
        reason = "need {} option to run".format(self.cli_option_name)
        skip_marker = pytest.mark.skip(reason=reason)
        for item in items:
            if self.test_mark in item.keywords:
                item.add_marker(skip_marker)
Usage example (must be put in conftest.py):
slow_skipper = TestsWithMarkSkipper(
    test_mark='slow',
    cli_option_name="--runslow",
    cli_option_help="run slow tests",
)

pytest_addoption = slow_skipper.pytest_addoption_hook
pytest_collection_modifyitems = slow_skipper.pytest_collection_modifyitems_hook
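For completeness, a test module used with this skipper could look like the following (my own illustration, mirroring the docs example earlier in this thread):
# content of test_module.py (illustration only)
import pytest

def test_fast():
    pass

@pytest.mark.slow
def test_slow_scrape():
    pass  # collected but skipped unless pytest is invoked with --runslow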

Change the name of py.test tests

I'm interested in migrating my test suite from nose to py.test. I don't like the test names printed by py.test in the test failure summary, because they don't include the file name of the test. I do really like the test names that py.test uses to print progress in verbose mode.
For example:
[dbw tests]$ py.test -sv r_group/test_meta_analysis/test_meta_analysis.py::_TestPanel::testDisplaySummary
==================================== test session starts ===============================
collected 3 items
r_group/test_meta_analysis/test_meta_analysis.py::_TestPanel::testDisplaySummary FAILED
========================================= FAILURES =====================================
_______________________________ _TestPanel.testDisplaySummary __________________________
This is pretty important to me because I have ~40K tests, and the short name "_TestPanel.testDisplaySummary" is not helpful to me in quickly finding the test I want. I assume that there is a built-in py.test hook that will do this, but I haven't found it yet.
The method summary_failures() of _pytest.terminal.TerminalReporter is the one that prints that line (I found it by searching for the string "FAILURES"). It uses _getfailureheadline() to get the test name (and to discard the file name and line number).
I'd suggest subclassing TerminalReporter and override _getfailureheadline().
For example:
def _getfailureheadline(self, rep):
    if hasattr(rep, 'location'):
        fspath, lineno, domain = rep.location
        return '::'.join((fspath, domain))
    else:
        return super()._getfailureheadline(rep)
Produces the following output:
test.py::test_x FAILED
================ FAILURES ================
____________ test.py::test_x _____________
You can override the default reporter with your own by writing a new plugin with the following:
class MyOwnReporter(TerminalReporter):
    ...

def pytest_configure(config):
    # replace the built-in terminal reporter with ours (needs: import sys,
    # from _pytest.terminal import TerminalReporter)
    config.pluginmanager.unregister(config.pluginmanager.getplugin('terminalreporter'))
    reporter = MyOwnReporter(config, sys.stdout)
    config.pluginmanager.register(reporter, 'terminalreporter')

How can I repeat each test multiple times in a py.test run?

I want to run each selected py.test item an arbitrary number of times, sequentially.
I don't see any standard py.test mechanism for doing this.
I attempted to do this in the pytest_collection_modifyitems() hook. I modified the list of items passed in, to specify each item more than once. The first execution of a test item works as expected, but that seems to cause some problems for my code.
Further, I would prefer to have a unique test item object for each run, as I use id(item) as a key in various reporting code. Unfortunately, I can't find any py.test code to duplicate a test item; copy.copy() doesn't work, and copy.deepcopy() raises an exception.
Can anybody suggest a strategy for executing a test multiple times?
One possible strategy is parameterizing the test in question, but not explicitly using the parameter.
For example:
@pytest.mark.parametrize('execution_number', range(5))
def test_run_multiple_times(execution_number):
    assert True
The above test should run five times.
Check out the parametrization documentation: https://pytest.org/latest/parametrize.html
The pytest plugin pytest-repeat exists for this purpose, and I recommend using plugins where possible rather than re-implementing their functionality yourself.
To use it, simply add pytest-repeat to your requirements.txt or pip install pytest-repeat, then execute your tests with --count n.
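pytest-repeat also ships a repeat marker, so, if I recall its API correctly, a single test can be repeated without touching the command line:
import pytest

# Assumes pytest-repeat is installed; it adds the `repeat` marker as well
# as the --count command line option.
@pytest.mark.repeat(3)
def test_possibly_flaky():
    assert True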
In order to run each test a number of times, we will programmatically parameterize each test as the tests are being generated.
First, let's add the parser option (include the following in one of your conftest.py's):
def pytest_addoption(parser):
    parser.addoption('--repeat', action='store',
        help='Number of times to repeat each test')
Now we add a "pytest_generate_tests" hook. Here is where the magic happens.
def pytest_generate_tests(metafunc):
    if metafunc.config.option.repeat is not None:
        count = int(metafunc.config.option.repeat)

        # We're going to duplicate these tests by parametrizing them,
        # which requires that each test has a fixture to accept the parameter.
        # We can add a new fixture like so:
        metafunc.fixturenames.append('tmp_ct')

        # Now we parametrize. This is what happens when we do e.g.,
        # @pytest.mark.parametrize('tmp_ct', range(count))
        # def test_foo(): pass
        metafunc.parametrize('tmp_ct', range(count))
Running without the repeat flag:
(env) $ py.test test.py -vv
============================= test session starts ==============================
platform darwin -- Python 2.7.5 -- py-1.4.20 -- pytest-2.5.2 -- env/bin/python
collected 2 items
test.py:4: test_1 PASSED
test.py:8: test_2 PASSED
=========================== 2 passed in 0.01 seconds ===========================
Running with the repeat flag:
(env) $ py.test test.py -vv --repeat 3
============================= test session starts ==============================
platform darwin -- Python 2.7.5 -- py-1.4.20 -- pytest-2.5.2 -- env/bin/python
collected 6 items
test.py:4: test_1[0] PASSED
test.py:4: test_1[1] PASSED
test.py:4: test_1[2] PASSED
test.py:8: test_2[0] PASSED
test.py:8: test_2[1] PASSED
test.py:8: test_2[2] PASSED
=========================== 6 passed in 0.01 seconds ===========================
Further reading:
https://pytest.org/latest/plugins.html#well-specified-hooks
https://pytest.org/latest/example/parametrize.html
Based on Frank T's suggestion, I found a very simple solution in the pytest_generate_tests() callout:
def pytest_addoption(parser):
    parser.addoption('--count', default=1, type='int', metavar='count',
        help='Run each test the specified number of times')

def pytest_generate_tests(metafunc):
    for i in range(metafunc.config.option.count):
        metafunc.addcall()
Now executing py.test --count 5 causes each test to be executed five times in the test session.
And it requires no changes to any of our existing tests.
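One caveat to add: Metafunc.addcall() was deprecated and later removed from pytest, so on a modern pytest this answer needs the parametrize-based form instead; a rough sketch, assuming the same --count option as above:
# Sketch for newer pytest versions, reusing the --count option added above.
def pytest_generate_tests(metafunc):
    count = int(metafunc.config.getoption("count"))
    if count > 1:
        # `repeat_index` is a hypothetical fixture name used only to carry
        # the repetition number into each generated test.
        metafunc.fixturenames.append("repeat_index")
        metafunc.parametrize("repeat_index", range(count))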
While pytest-repeat (the most popular answer) doesn't work for unittest class tests, pytest-flakefinder does:
pip install pytest-flakefinder
pytest --flake-finder --flake-runs=5 tests...
Before finding pytest-flakefinder, I wrote a little hack of a script that does a similar thing. You can find it here. The top of the script includes instructions on how it can be run.
Based on what I've seen here, and given that I already do some filtering of tests in pytest_collection_modifyitems, my method of choice is the following. In conftest.py
def pytest_addoption(parser):
    parser.addoption('--count', default=1, type=int, metavar='count',
        help='Run each test the specified number of times')

def pytest_collection_modifyitems(session, config, items):
    count = config.option.count
    items[:] = items * count  # add each test multiple times
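One caveat with this approach: items * count repeats references to the same item objects, so each repetition shares one nodeid and one id(item). If, as in the question, you key reporting on id(item), one of the parametrize-based answers above is a better fit.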
I had been looking for a simple solution for a long time: I just had to run one test n times, and it must not fail even once. I solved this problem as simply and crudely as possible, but it worked for me.
def run_multiple_times(number):
    i = 0
    while i < number:
        # test
        i += 1

run_multiple_times(5)

Does pytest support "default" markers?

I am using pytest to test python models for embedded systems. Features to be tested vary by platform. (I'm using 'platform' in this context to mean an embedded system type, not an OS type.)
The most straightforward way to organize my tests would be to allocate them to directories based on platform type.
/platform1
/platform2
/etc.
pytest /platform1
This quickly became hard to support as many features overlap across platforms. I've since moved my tests into a single directory, with tests for each functional area assigned to a single filename (test_functionalityA.py, for example).
I then use pytest markers to indicate which tests within a file apply to a given platform.
@pytest.mark.all_platforms
def test_some_functionalityA1():
    ...

@pytest.mark.platform1
@pytest.mark.platform2
def test_some_functionalityA2():
    ...
While I would love to get 'conftest' to automatically detect the platform type and only run the appropriate tests, I've resigned myself to specifying which tests to run on the command line.
pytest -m "(platform1 or all_platforms)"
The Question: (finally!)
Is there a way to simplify things and have pytest run all unmarked tests by default and additionally all tests passed via '-m' on the command-line?
For example:
pytest -m "platform1"
would run tests marked @pytest.mark.platform1 as well as all tests marked @pytest.mark.all_platforms, or even all tests with no @pytest.mark at all?
Given the large amount of shared functionality, being able to drop the @pytest.mark.all_platforms line would be a big help.
Let's tackle the full problem. I think you can put a conftest.py file along with your tests, and it will take care of skipping all non-matching tests (non-marked tests will always match and thus never get skipped). Here I am using sys.platform, but I am sure you have a different way to compute your platform value.
# content of conftest.py
#
import sys
import pytest

ALL = set("osx linux2 win32".split())

def pytest_runtest_setup(item):
    if isinstance(item, item.Function):
        plat = sys.platform
        if not hasattr(item.obj, plat):
            if ALL.intersection(set(item.obj.__dict__)):
                pytest.skip("cannot run on platform %s" % (plat))
With this you can mark your tests like this:
# content of test_plat.py
import pytest

@pytest.mark.osx
def test_if_apple_is_evil():
    pass

@pytest.mark.linux2
def test_if_linux_works():
    pass

@pytest.mark.win32
def test_if_win32_crashes():
    pass

def test_runs_everywhere_yay():
    pass
and if you run it with:
$ py.test -rs    # this option reports skip reasons
then you will see two tests skipped and two tests executed, as expected:
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1
collecting ... collected 4 items
test_plat.py s.s.
========================= short test summary info ==========================
SKIP [2] /home/hpk/tmp/doc-exec-222/conftest.py:12: cannot run on platform linux2
=================== 2 passed, 2 skipped in 0.01 seconds ====================
Note that if you specify a platform via the -m marker command line option like this:
$ py.test -m linux2
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1
collecting ... collected 4 items
test_plat.py .
=================== 3 tests deselected by "-m 'linux2'" ====================
================== 1 passed, 3 deselected in 0.01 seconds ==================
then the unmarked tests will not be run. It is thus a way to restrict the run to specific tests.
Late to the party, but I just solved a similar problem by adding a default marker to all unmarked tests.
As a direct answer to the question: you can have unmarked tests always run, and include marked tests only as specified via the -m option, by adding the following to your conftest.py:
def pytest_collection_modifyitems(items, config):
    # add `always_run` marker to all unmarked items
    for item in items:
        if not any(item.iter_markers()):
            item.add_marker("always_run")

    # ensure the `always_run` marker is always selected, in addition to
    # whatever -m expression was given on the command line
    markexpr = config.getoption("markexpr") or "False"
    config.option.markexpr = f"always_run or ({markexpr})"
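If you use this, it is also worth registering the custom marker so that recent pytest versions do not warn about an unknown always_run mark; a minimal sketch, again in conftest.py:
def pytest_configure(config):
    # Register the marker added by pytest_collection_modifyitems above so
    # pytest does not emit PytestUnknownMarkWarning for it.
    config.addinivalue_line(
        "markers", "always_run: fallback marker added to otherwise unmarked tests"
    )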
