Pytest capture stdout of a certain test - python

Is there a way to get the "Captured stdout call" output just for a specific test without failing the test?
Let's say I have 10 tests, plus a test_summary. test_summary really just prints some kind of summary/statistics of the tests, but in order to get that output/printout, I currently have to fail that test intentionally. Of course this test_summary runs last, using pytest-ordering. Is there a better way to get those results without failing the test? Or should it not be in a test at all, but rather in conftest.py or somewhere else? Please advise on the best practice and how I can get this summary/results (basically a printout from a specific script I wrote).

First, to answer your exact question:
Is there a way to get the Captured stdout call just for a specific test without failing the test?
You can add a custom section that mimics Captured stdout call and is printed on test success. The output captured in each test is stored in the related TestReport object and is accessed via report.capstdout. Example impl: add the following code to a conftest.py in your project or tests root directory:
import os

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # on failures, don't add "Captured stdout call" as pytest does that already
    # otherwise, the section "Captured stdout call" will be added twice
    if exitstatus > 0:
        return
    # get all reports
    reports = terminalreporter.getreports('')
    # combine captured stdout of reports for tests named `<smth>::test_summary`
    content = os.linesep.join(
        report.capstdout for report in reports
        if report.capstdout and report.nodeid.endswith("test_summary")
    )
    # add custom section that mimics pytest's one
    if content:
        terminalreporter.ensure_newline()
        terminalreporter.section(
            'Captured stdout call',
            sep='-',
            blue=True,
            bold=True,
        )
        terminalreporter.line(content)
This adds a custom section Captured stdout call that only prints the output captured for the test whose ID ends with test_summary (if you have multiple test functions named test_summary, extend the check accordingly). To distinguish the two sections, the custom one has a blue header; if you want it to match the original, remove the blue=True argument.
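If several modules define a test_summary function, you can pin the section to one exact node ID instead of the endswith check. A minimal sketch of such a stricter filter (the path tests/test_stats.py is an assumption, adjust it to your layout):

# drop-in replacement for the filter used in the hook above
SUMMARY_NODEID = "tests/test_stats.py::test_summary"  # hypothetical node ID

def is_summary_report(report):
    # True only for the report of the designated summary test
    return bool(report.capstdout) and report.nodeid == SUMMARY_NODEID

You would then use if is_summary_report(report) inside the generator expression.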
Now, to address your actual problem:
test_summary really just prints some kind of summary/statistics of the tests
Using a test for custom reporting smells a lot like a workaround to me; why not collect the data in the tests and add a custom section printing that data afterwards? To collect the data, you can e.g. use the record_property fixture:
def test_foo(record_property):
    # records a key-value pair
    record_property("hello", "world")

def test_bar(record_property):
    record_property("spam", "eggs")
To collect and output the custom properties recorded, slightly alter the above hookimpl. The data stored via record_property is accessible via report.user_properties:
import os

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    reports = terminalreporter.getreports('')
    content = os.linesep.join(
        f'{key}: {value}' for report in reports
        for key, value in report.user_properties
    )
    if content:
        terminalreporter.ensure_newline()
        terminalreporter.section(
            'My custom summary',
            sep='-',
            blue=True,
            bold=True
        )
        terminalreporter.line(content)
Running the above tests now yields:
$ pytest test_spam.py
=============================== test session starts ================================
platform linux -- Python 3.9.0, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/oleg.hoefling/projects/private/stackoverflow/so-64812992
plugins: metadata-1.10.0, json-report-1.2.4, cov-2.10.1, forked-1.3.0, xdist-2.1.0
collected 2 items
test_spam.py .. [100%]
-------------------------------- My custom summary ---------------------------------
hello: world
spam: eggs
================================ 2 passed in 0.01s =================================

You can use the standard pytest terminal reporter.
def test_a(request):
    reporter = request.config.pluginmanager.getplugin("terminalreporter")
    reporter.write_line("Hello", yellow=True)
    reporter.write_line("World", red=True)
The reporter is a standard plugin available in every pytest version that I can recall. Unfortunately it is not documented, although the API is pretty stable.
You can find TerminalReporter class on pytest's github: https://github.com/pytest-dev/pytest/blob/master/src/_pytest/terminal.py
The methods most useful to you are probably ensure_newline(), write(), flush(), and the absolute champion, write_line(), which does all the work.
Available styles could be found at https://github.com/pytest-dev/pytest/blob/master/src/_pytest/_io/terminalwriter.py
_esctable = dict(
    black=30,
    red=31,
    green=32,
    yellow=33,
    blue=34,
    purple=35,
    cyan=36,
    white=37,
    Black=40,
    Red=41,
    Green=42,
    Yellow=43,
    Blue=44,
    Purple=45,
    Cyan=46,
    White=47,
    bold=1,
    light=2,
    blink=5,
    invert=7,
)
Lowercased styles are for the foreground, capitalized are for background.
For example, yellow text on a green background would be printed as:
reporter.write_line("Text", yellow=True, Green=True)
I hope this helps.


Python - Using pytest to skip test unless specified

Background
I am using pytest to test a web scraper that pushes data to a database. The class only pulls the HTML and pushes it to a database to be parsed later. Most of my tests use dummy data to represent the HTML.
Question
I want a test that scrapes an actual webpage from the website, but I want that test to be turned off automatically unless specified. A similar scenario is having an expensive or time-consuming test that you do not always want to run.
Expected Solution
I am expecting some kind of marker that suppresses a test unless I tell pytest to run all suppressed tests, but I do not see that in the documentation.
What I have done
I am currently using the skip marker and commenting it out when I do want the test to run.
I also tried to use the skipif marker and pass an argument to the Python script with the command pytest test_file.py 1, together with the code below in the test file. The problem is that when I try to provide an argument to my test file, pytest expects it to be another file name, so I get the error "no tests run in 0.00 seconds, ERROR: file not found: 1".
if len(sys.argv) == 1:
    RUN_ALL_TESTS = False
else:
    RUN_ALL_TESTS = True

...
# other tests
...

@pytest.mark.skipif(RUN_ALL_TESTS)
def test_scrape_website():
    ...
I might be able to treat the test as a fixture and use @pytest.fixture(autouse=False), though I'm not sure how to override the autouse variable.
A similar solution was stated in How to skip a pytest using an external fixture? but this solutions seems more complicated than what I need.
The docs describe exactly your problem: https://docs.pytest.org/en/latest/example/simple.html#control-skipping-of-tests-according-to-command-line-option. Copying from there:
Here is a conftest.py file adding a --runslow command line option to
control skipping of pytest.mark.slow marked tests:
# content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now write a test module like this:
# content of test_module.py
import pytest

def test_func_fast():
    pass

@pytest.mark.slow
def test_func_slow():
    pass
There are a couple of ways to handle this, but I'll go over two common approaches I've seen in Python baselines.
1) Separate your tests by putting the "optional" tests in another directory.
Not sure what your project layout looks like, but you can do something like this (only the test directory is important, the rest is just a toy example layout):
README.md
setup.py
requirements.txt
test/
    unit/
        test_something.py
        test_something_else.py
    integration/
        test_optional.py
application/
    __init__.py
    some_module.py
Then, when you invoke pytest, you invoke it by doing pytest test/unit if you want to run just the unit tests (i.e. only test_something*.py files), or pytest test/integration if you want to run just the integration tests (i.e. only test_optional.py), or pytest test if you want to run all the tests. So, by default, you can just run pytest test/unit.
I recommend wrapping these calls in some sort of script. I prefer make since it is powerful for this type of wrapping. Then you can say make test and it just runs your default (fast) test suite, or make test_all, and it'll run all the tests (which may or may not be slow).
Example Makefile you could wrap with:
.PHONY: all clean install test test_int test_all uninstall

all: install

clean:
	rm -rf build
	rm -rf dist
	rm -rf *.egg-info

install:
	python setup.py install

test: install
	pytest -v -s test/unit

test_int: install
	pytest -v -s test/integration

test_all: install
	pytest -v -s test

uninstall:
	pip uninstall app_name
2) Mark your tests judiciously with the @pytest.mark.skipif decorator, but use an environment variable as the trigger
I don't like this solution as much; it feels a bit haphazard to me (it's hard to tell which set of tests is being run on any given pytest run). However, what you can do is define an environment variable and then rope that environment variable into the module to detect whether you want to run all your tests. Environment variables are shell dependent, but I'll assume you have a bash environment since that's a popular shell.
You could do export TEST_LEVEL="unit" for just fast unit tests (so this would be your default), or export TEST_LEVEL="all" for all your tests. Then in your test files, you can do what you were originally trying to do like this:
import os

...

@pytest.mark.skipif(os.environ.get("TEST_LEVEL", "unit") == "unit",
                    reason="needs TEST_LEVEL=all to run")
def test_scrape_website():
    ...
Note: Naming the test levels "unit" and "integration" is irrelevant. You can name them whatever you want. You can also have many many levels (like maybe nightly tests or performance tests).
Also, I think option 1 is the best way to go, since it not only clearly allows separation of testing, but it can also add semantics and clarity to what the tests mean and represent. But there is no "one size fits all" in software, you'll have to decide what approach you like based on your particular circumstances.
HTH!
A very simple solution is to use the -k argument. You can use the -k parameter to deselect certain tests. -k tries to match its argument against any part of a test's name or its markers, and you can invert the match by using not (you can also use the boolean operators and and or). Thus -k 'not slow' skips tests that have "slow" in the name, carry a marker with "slow" in the name, or whose class/module name contains "slow".
For example, given this file:
import pytest

def test_true():
    assert True

@pytest.mark.slow
def test_long():
    assert False

def test_slow():
    assert False
When you run:
pytest -k 'not slow'
It outputs something like the following (note that both failing tests were deselected because they matched the filter):
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
rootdir: c:\Users\User\Documents\python, inifile:
collected 3 items
test_thing.py . [100%]
============================= 2 tests deselected ==============================
=================== 1 passed, 2 deselected in 0.02 seconds ====================
Because of this eager matching you might want to do something like putting all your unit tests in a directory called unittest and then marking the slow ones as slow_unittest (so as not to accidentally match a test that just happens to have slow in the name). You could then use -k 'unittest and not slow_unittest' to match all your quick unit tests.
More examples of pytest marker usage are available in the pytest docs.
Here is a little class for reusing @xverges's code with multiple marks/CLI options:
from dataclasses import dataclass

import pytest

@dataclass
class TestsWithMarkSkipper:
    ''' Util to skip tests with mark, unless cli option provided. '''
    test_mark: str
    cli_option_name: str
    cli_option_help: str

    def pytest_addoption_hook(self, parser):
        parser.addoption(
            self.cli_option_name,
            action="store_true",
            default=False,
            help=self.cli_option_help,
        )

    def pytest_collection_modifyitems_hook(self, config, items):
        if not config.getoption(self.cli_option_name):
            self._skip_items_with_mark(items)

    def _skip_items_with_mark(self, items):
        reason = "need {} option to run".format(self.cli_option_name)
        skip_marker = pytest.mark.skip(reason=reason)
        for item in items:
            if self.test_mark in item.keywords:
                item.add_marker(skip_marker)
Usage example (must be put in conftest.py):
slow_skipper = TestsWithMarkSkipper(
    test_mark='slow',
    cli_option_name="--runslow",
    cli_option_help="run slow tests",
)

pytest_addoption = slow_skipper.pytest_addoption_hook
pytest_collection_modifyitems = slow_skipper.pytest_collection_modifyitems_hook

Change the name of py.test tests

I'm interested in migrating my test suite from nose to py.test. I don't like the test names printed by py.test in the test failure summary, because they don't include the file name of the test. I do really like the test names that py.test uses to print progress in verbose mode.
For example:
[dbw tests]$ py.test -sv r_group/test_meta_analysis/test_meta_analysis.py::_TestPanel::testDisplaySummary
==================================== test session starts ===============================
collected 3 items
r_group/test_meta_analysis/test_meta_analysis.py::_TestPanel::testDisplaySummary FAILED
========================================= FAILURES =====================================
_______________________________ _TestPanel.testDisplaySummary __________________________
This is pretty important to me because I have ~40K tests, and the short name "_TestPanel.testDisplaySummary" is not helpful to me in quickly finding the test I want. I assume that there is a built-in py.test hook that will do this, but I haven't found it yet.
The method summary_failures() of _pytest.terminal.TerminalReporter is the one that prints that line (I found it by searching for the string "FAILURES"). It uses _getfailureheadline() to get the test name (and to discard the file name and line number).
I'd suggest subclassing TerminalReporter and override _getfailureheadline().
For example:
def _getfailureheadline(self, rep):
    if hasattr(rep, 'location'):
        fspath, lineno, domain = rep.location
        return '::'.join((fspath, domain))
    else:
        return super()._getfailureheadline(rep)
Produces the following output:
test.py::test_x FAILED
================ FAILURES ================
____________ test.py::test_x _____________
You can override the default reporter with your own by writing a new plugin with the following:
import sys

from _pytest.terminal import TerminalReporter

class MyOwnReporter(TerminalReporter):
    ...

def pytest_configure(config):
    reporter = MyOwnReporter(config, sys.stdout)
    config.pluginmanager.register(reporter, 'terminalreporter')

How can I repeat each test multiple times in a py.test run?

I want to run each selected py.test item an arbitrary number of times, sequentially.
I don't see any standard py.test mechanism for doing this.
I attempted to do this in the pytest_collection_modifyitems() hook. I modified the list of items passed in, to specify each item more than once. The first execution of a test item works as expected, but that seems to cause some problems for my code.
Further, I would prefer to have a unique test item object for each run, as I use id(item) as a key in various reporting code. Unfortunately, I can't find any py.test code to duplicate a test item, copy.copy() doesn't work, and copy.deepcopy() gets an exception.
Can anybody suggest a strategy for executing a test multiple times?
One possible strategy is parameterizing the test in question, but not explicitly using the parameter.
For example:
import pytest

@pytest.mark.parametrize('execution_number', range(5))
def test_run_multiple_times(execution_number):
    assert True
The above test should run five times.
Check out the parametrization documentation: https://pytest.org/latest/parametrize.html
The pytest module pytest-repeat exists for this purpose, and I recommend using modules where possible, rather than re-implementing their functionality yourself.
To use it simply add pytest-repeat to your requirements.txt or pip install pytest-repeat, then execute your tests with --count n.
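For example (the test file name is just a placeholder):

$ pip install pytest-repeat
$ pytest --count=10 test_spam.py

Every collected test is parametrized by the plugin and executed ten times in a row.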
In order to run each test a number of times, we will programmatically parameterize each test as the tests are being generated.
First, let's add the parser option (include the following in one of your conftest.py's):
def pytest_addoption(parser):
    parser.addoption('--repeat', action='store',
                     help='Number of times to repeat each test')
Now we add a "pytest_generate_tests" hook. Here is where the magic happens.
def pytest_generate_tests(metafunc):
    if metafunc.config.option.repeat is not None:
        count = int(metafunc.config.option.repeat)

        # We're going to duplicate these tests by parametrizing them,
        # which requires that each test has a fixture to accept the parameter.
        # We can add a new fixture like so:
        metafunc.fixturenames.append('tmp_ct')

        # Now we parametrize. This is what happens when we do e.g.,
        # @pytest.mark.parametrize('tmp_ct', range(count))
        # def test_foo(): pass
        metafunc.parametrize('tmp_ct', range(count))
Running without the repeat flag:
(env) $ py.test test.py -vv
============================= test session starts ==============================
platform darwin -- Python 2.7.5 -- py-1.4.20 -- pytest-2.5.2 -- env/bin/python
collected 2 items
test.py:4: test_1 PASSED
test.py:8: test_2 PASSED
=========================== 2 passed in 0.01 seconds ===========================
Running with the repeat flag:
(env) $ py.test test.py -vv --repeat 3
============================= test session starts ==============================
platform darwin -- Python 2.7.5 -- py-1.4.20 -- pytest-2.5.2 -- env/bin/python
collected 6 items
test.py:4: test_1[0] PASSED
test.py:4: test_1[1] PASSED
test.py:4: test_1[2] PASSED
test.py:8: test_2[0] PASSED
test.py:8: test_2[1] PASSED
test.py:8: test_2[2] PASSED
=========================== 6 passed in 0.01 seconds ===========================
Further reading:
https://pytest.org/latest/plugins.html#well-specified-hooks
https://pytest.org/latest/example/parametrize.html
Based on Frank T's suggestion, I found a very simple solution using the pytest_generate_tests() hook:
def pytest_addoption(parser):
    parser.addoption('--count', default=1, type='int', metavar='count', help='Run each test the specified number of times')

def pytest_generate_tests(metafunc):
    for i in range(metafunc.config.option.count):
        metafunc.addcall()
Now executing py.test --count 5 causes each test to be executed five times in the test session.
And it requires no changes to any of our existing tests.
While pytest-repeat (the most popular answer) doesn't work for unittest class tests, pytest-flakefinder does:
pip install pytest-flakefinder
pytest --flake-finder --flake-runs=5 tests...
Before finding pytest-flakefinder, I wrote a little hack of a script that does a similar thing. You can find it here. The top of the script includes instructions on how it can be run.
Based on what I've seen here, and given that I already do some filtering of tests in pytest_collection_modifyitems, my method of choice is the following, in conftest.py:
def pytest_addoption(parser):
    parser.addoption('--count', default=1, type=int, metavar='count', help='Run each test the specified number of times')

def pytest_collection_modifyitems(session, config, items):
    count = config.option.count
    items[:] = items * count  # add each test multiple times
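With that in place, the invocation looks like pytest-repeat's (the test file name is again a placeholder):

$ pytest --count=3 test_spam.py

Note that this simply repeats the collected items, so the repeated runs share the same test IDs in the report, unlike the parametrization-based approaches above.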
I had been looking for a simple solution for a long time. I just had to run one test n times, and it must not fail even once. I solved this problem as simply and naively as possible, but it worked for me.
def run_multiple_times(number):
    i = 0
    while i < number:
        # test
        i += 1

run_multiple_times(5)

Python CLI program unit testing

I am working on a Python command-line interface (CLI) program, and I find testing it tedious. For example, here is the help output of the program:
usage: pyconv [-h] [-f ENCODING] [-t ENCODING] [-o file_path] file_path

Convert text file from one encoding to another.

positional arguments:
  file_path

optional arguments:
  -h, --help            show this help message and exit
  -f ENCODING, --from ENCODING
                        Encoding of source file
  -t ENCODING, --to ENCODING
                        Encoding you want
  -o file_path, --output file_path
                        Output file path
When I make changes to the program and want to test something, I must open a terminal, type the command (with options and arguments), hit enter, and see if any error occurs while running. If an error does occur, I must go back to the editor, check the code from top to bottom, guess where the bug is, make small changes, add print statements, return to the terminal, run the command again... over and over.
So my question is: what is the best way to test a CLI program? Can it be as easy as unit testing normal Python scripts?
I think it's perfectly fine to test functionally on a whole-program level. It's still possible to test one aspect/option per test. This way you can be sure that the program really works as a whole. Writing unit-tests usually means that you get to execute your tests quicker and that failures are usually easier to interpret/understand. But unit-tests are typically more tied to the program structure, requiring more refactoring effort when you internally change things.
Anyway, using py.test, here is a little example for testing a latin1 to utf8 conversion for pyconv::
# content of test_pyconv.py
import pytest

# we reuse a bit of pytest's own testing machinery, this should eventually come
# from a separately installable pytest-cli plugin.
pytest_plugins = ["pytester"]

@pytest.fixture
def run(testdir):
    def do_run(*args):
        args = ["pyconv"] + list(args)
        return testdir._run(*args)
    return do_run
def test_pyconv_latin1_to_utf8(tmpdir, run):
    input = tmpdir.join("example.txt")
    content = unicode("\xc3\xa4\xc3\xb6", "latin1")
    with input.open("wb") as f:
        f.write(content.encode("latin1"))
    output = tmpdir.join("example.txt.utf8")
    result = run("-flatin1", "-tutf8", input, "-o", output)
    assert result.ret == 0
    with output.open("rb") as f:
        newcontent = f.read()
    assert content.encode("utf8") == newcontent
After installing pytest ("pip install pytest") you can run it like this::
$ py.test test_pyconv.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.5dev1
collected 1 items
test_pyconv.py .
========================= 1 passed in 0.40 seconds =========================
The example reuses some internal machinery of pytest's own testing by leveraging pytest's fixture mechanism, see http://pytest.org/latest/fixture.html. If you forget about the details for a moment, you can just work from the fact that "run" and "tmpdir" are provided for helping you to prepare and run tests. If you want to play, you can try to insert a failing assert-statement or simply "assert 0" and then look at the traceback or issue "py.test --pdb" to enter a python prompt.
Start from the user interface with functional tests and work down towards unit tests. It can feel difficult, especially when you use the argparse module or the click package, which take control of the application entry point.
The cli-test-helpers Python package has examples and helper functions (context managers) for a holistic approach on writing tests for your CLI. It's a simple idea, and one that works perfectly with TDD:
Start with functional tests (to ensure your user interface definition) and
Work towards unit tests (to ensure your implementation contracts)
Functional tests
NOTE: I assume you develop code that is deployed with a setup.py file or is run as a module (-m).
Is the entrypoint script installed? (tests the configuration in your setup.py)
Can this package be run as a Python module? (i.e. without having to be installed)
Is command XYZ available? etc. Cover your entire CLI usage here!
Those tests are simplistic: They run the shell command you would enter in the terminal, e.g.
import os

def test_entrypoint():
    exit_status = os.system('foobar --help')
    assert exit_status == 0
Note the trick to use a non-destructive operation (e.g. --help or --version) as we can't mock anything with this approach.
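The other bullets can be covered in the same way. For instance, a sketch of the "run as a Python module" check, reusing the hypothetical foobar package name from above:

import os

def test_runas_module():
    """Can this package be run as a Python module?"""
    exit_status = os.system('python -m foobar --help')
    assert exit_status == 0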
Towards unit tests
To test single aspects inside the application you will need to mimic things like command line arguments and maybe environment variables. You will also need to catch the exiting of your script to avoid the tests failing with SystemExit exceptions.
Example with ArgvContext to mimic command line arguments:
from unittest.mock import patch

import pytest
from cli_test_helpers import ArgvContext
import foobar.cli

@patch('foobar.command.baz')
def test_cli_command(mock_command):
    """Is the correct code called when invoked via the CLI?"""
    with ArgvContext('foobar', 'baz'), pytest.raises(SystemExit):
        foobar.cli.main()
    assert mock_command.called
Note that we mock the function that we want our CLI framework (click in this example) to call, and that we catch SystemExit that the framework naturally raises. The context managers are provided by cli-test-helpers and pytest.
Unit tests
The rest is business as usual. With the above two strategies we've overcome the control a CLI framework may have taken away from us. The rest is usual unit testing. TDD-style hopefully.
Disclosure: I am the author of the cli-test-helpers Python package.
So my question is, what is the best way to do testing with CLI program, can it be as easy as unit testing with normal python scripts?
The only difference is that when you run a Python module as a script, its __name__ attribute is set to '__main__'. So generally, if you intend to run your script from the command line, it should have the following form:
import sys

# function and class definitions, etc.
# ...

def foo(arg):
    pass

def main():
    """Entry point to the script"""
    # Do parsing of command line arguments and other stuff here. And then
    # make calls to whatever functions and classes that are defined in your
    # module. For example:
    foo(sys.argv[1])

if __name__ == '__main__':
    main()
Now it makes no difference how you use it: as a script or as a module. So inside your unit-testing code you can just import the foo function, call it, and make any assertions you want.
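For example, a minimal pytest-style unit test for the skeleton above (mymodule is a placeholder for whatever your script file is called):

# content of test_mymodule.py -- assumes the script above was saved as mymodule.py
from mymodule import foo

def test_foo_accepts_an_argument():
    # call the function directly; no subprocess and no sys.argv needed
    assert foo("some-arg") is None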
Maybe too little too late,
but you can always use
import os

result = os.system('<insert your command with options here>')
assert result == 0
In that way, you can run your program as if it was from command line, and evaluate the exit code.
(Update after I studied pytest)
You can also use capsys.
(from running pytest --fixtures)
capsys
    Enable text capturing of writes to sys.stdout and sys.stderr.
    The captured output is made available via ``capsys.readouterr()`` method
    calls, which return a ``(out, err)`` namedtuple.
    ``out`` and ``err`` will be ``text`` objects.
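As a rough sketch of how capsys is used in a test (main here is a stand-in for your real entry point, not something from the question):

def main():
    # stand-in for the real CLI entry point
    print("converting file...")

def test_main_reports_progress(capsys):
    main()
    out, err = capsys.readouterr()  # everything written to stdout/stderr so far
    assert "converting" in out
    assert err == ""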
This isn't for Python specifically, but what I do to test command-line scripts is to run them with various predetermined inputs and options and store the correct output in a file. Then, to test them when I make changes, I simply run the new script and pipe the output into diff correct_output -. If the files are the same, it outputs nothing. If they're different, it shows you where. This will only work if you are on Linux or OS X; on Windows, you will have to get MSYS.
Example:
python mycliprogram --someoption "some input" | diff correct_output -
To make it even easier, you can add all these test runs to your 'make test' Makefile target, which I assume you already have. ;)
If you are running many of these at once, you could make it a little more obvious where each one ends by adding a fail tag:
python mycliprogram --someoption "some input" | diff correct_output - || tput setaf 1 && echo "FAILED"
The short answer is yes, you can use unit tests, and you should. If your code is well structured, it should be quite easy to test each component separately, and if you need to, you can always mock sys.argv to simulate running it with different arguments.
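For instance, a sketch using pytest's built-in monkeypatch fixture to fake the command line (main is a hypothetical entry point that reads sys.argv):

import sys

def main():
    # hypothetical entry point: returns the arguments it would act on
    return sys.argv[1:]

def test_main_sees_patched_argv(monkeypatch):
    monkeypatch.setattr(sys, "argv", ["pyconv", "-f", "latin1", "input.txt"])
    assert main() == ["-f", "latin1", "input.txt"]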
pytest-console-scripts is a Pytest plugin for testing python scripts installed via console_scripts entry point of setup.py.
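It exposes a script_runner fixture, used roughly like this (foobar is an assumed console script name; check the plugin's docs, since the call signature has changed between versions):

def test_version(script_runner):
    result = script_runner.run('foobar', '--version')
    assert result.success
    assert result.stderr == ''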
For Python 3.5+, you can use the simpler subprocess.run to call your CLI command from your test.
Using pytest:
import subprocess

def test_command__works_properly():
    try:
        result = subprocess.run(['command', '--argument', 'value'], check=True, capture_output=True, text=True)
    except subprocess.CalledProcessError as error:
        print(error.stdout)
        print(error.stderr)
        raise error
The output can be accessed via result.stdout, result.stderr, and result.returncode if needed.
The check parameter causes an exception to be raised if an error occurs. Note Python 3.7+ is required for the capture_output and text parameters, which simplify capturing and reading stdout/stderr.
Given that you are explicitly asking about testing for a command line application, I believe that you are aware of unit-testing tools in python and that you are actually looking for a tool to automate end-to-end tests of a command line tool. There are a couple of tools out there that are specifically designed for that. If you are looking for something that's pip-installable, I would recommend cram. It integrates well with the rest of the python environment (e.g. through a pytest extension) and it's quite easy to use:
Simply write the commands you want to run prefixed with $, and the expected output on the lines below. For example, the following would be a valid cram test:
  $ echo Hello
  Hello
By having four spaces in front of expected output and two in front of the test, you can actually use these tests to also write documentation. More on that on the website.
You can use the standard unittest module:
# python -m unittest <test module>
or use nose as a testing framework. Just write classic unittest files in a separate directory and run:
# nosetests <test modules directory>
Writing unit tests is easy. Just follow the online manual for unit testing.
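A bare-bones unittest module for the pyconv example could look roughly like this (pyconv.convert and its signature are assumptions made purely for illustration):

# content of test_pyconv.py
import unittest

from pyconv import convert  # assumed API: convert(data, source_encoding, target_encoding)

class TestConvert(unittest.TestCase):
    def test_latin1_to_utf8(self):
        data = "äö".encode("latin1")
        self.assertEqual(convert(data, "latin1", "utf8"), "äö".encode("utf8"))

if __name__ == '__main__':
    unittest.main()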
I would not test the program as a whole; that is not a good test strategy and may not actually pinpoint the source of an error. The CLI is just a front end to an API. You test the API via your unit tests, and then when you make a change to a specific part you have a test case to exercise that change.
So, restructure your application so that you test the API and not the application itself. But you can also have a functional test that actually runs the full application and checks that the output is correct.
In short, yes, testing the code is the same as testing any other code, but you must test the individual parts rather than their combination as a whole to ensure that your changes do not break everything.

Does pytest support "default" markers?

I am using pytest to test python models for embedded systems. Features to be tested vary by platform. (I'm using 'platform' in this context to mean an embedded system type, not an OS type).
The most straightforward way to organize my tests would be to allocate them to directories based on platform type.
/platform1
/platform2
/etc.
pytest /platform1
This quickly became hard to support as many features overlap across platforms. I've since moved my tests into a single directory, with tests for each functional area assigned to a single filename (test_functionalityA.py, for example).
I then use pytest markers to indicate which tests within a file apply to a given platform.
@pytest.mark.all_platforms
def test_some_functionalityA1():
    ...

@pytest.mark.platform1
@pytest.mark.platform2
def test_some_functionalityA2():
    ...
While I would love to get 'conftest' to automatically detect the platform type and only run the appropriate tests, I've resigned myself to specifying which tests to run on the command line.
pytest -m "(platform1 or all_platforms)"
The Question: (finally!)
Is there a way to simplify things and have pytest run all unmarked tests by default and additionally all tests passed via '-m' on the command-line?
For example:
pytest -m "platform1"
would run tests marked @pytest.mark.platform1 as well as all tests marked @pytest.mark.all_platforms, or even all tests with no @pytest.mark at all?
Given the large amount of shared functionality, being able to drop the @pytest.mark.all_platforms line would be a big help.
Let's tackle the full problem. I think you can put a conftest.py file along with your tests and it will take care of skipping all non-matching tests (non-marked tests always match and thus never get skipped). Here I am using sys.platform, but I am sure you have a different way to compute your platform value.
# content of conftest.py
#
import sys
import pytest

ALL = set("osx linux2 win32".split())

def pytest_runtest_setup(item):
    if isinstance(item, item.Function):
        plat = sys.platform
        if not hasattr(item.obj, plat):
            if ALL.intersection(set(item.obj.__dict__)):
                pytest.skip("cannot run on platform %s" % (plat))
With this you can mark your tests like this::
# content of test_plat.py
import pytest

@pytest.mark.osx
def test_if_apple_is_evil():
    pass

@pytest.mark.linux2
def test_if_linux_works():
    pass

@pytest.mark.win32
def test_if_win32_crashes():
    pass

def test_runs_everywhere_yay():
    pass
and if you run with the -rs option (which reports skip reasons), you will see two tests skipped and two tests executed, as expected:
$ py.test -rs # this option reports skip reasons
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1
collecting ... collected 4 items
test_plat.py s.s.
========================= short test summary info ==========================
SKIP [2] /home/hpk/tmp/doc-exec-222/conftest.py:12: cannot run on platform linux2
=================== 2 passed, 2 skipped in 0.01 seconds ====================
Note that if you specify a platform via the -m marker command line option like this:
$ py.test -m linux2
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1
collecting ... collected 4 items
test_plat.py .
=================== 3 tests deselected by "-m 'linux2'" ====================
================== 1 passed, 3 deselected in 0.01 seconds ==================
then the unmarked tests will not be run. It is thus a way to restrict the run to specific tests.
Late to the party, but I just solved a similar problem by adding a default marker to all unmarked tests.
As a direct answer to the question: you can have unmarked tests always run, and include marked tests only as specified via the -m option, by adding the following to your conftest.py:
def pytest_collection_modifyitems(items, config):
    # add `always_run` marker to all unmarked items
    for item in items:
        if not any(item.iter_markers()):
            item.add_marker("always_run")

    # ensure the `always_run` marker is always selected, on top of whatever
    # mark expression was passed via -m
    markexpr = config.getoption("markexpr")
    if markexpr:
        config.option.markexpr = f"always_run or ({markexpr})"
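One practical follow-up: since always_run is added dynamically, it is worth registering the marker so pytest does not emit PytestUnknownMarkWarning (or fail under --strict-markers). A small sketch for the same conftest.py:

def pytest_configure(config):
    # register the marker used above so pytest knows it is intentional
    config.addinivalue_line(
        "markers", "always_run: automatically applied to otherwise unmarked tests"
    )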
