python unittest with coverage report on (sub)processes

I'm using nose to run my "unittest" tests and have nose-cov to include coverage reports. These all work fine, but part of my tests require running some code as a multiprocessing.Process. The nose-cov docs state that it can do multiprocessing, but I'm not sure how to get that to work.
I'm just running tests by running nosetests and using the following .coveragerc:
[run]
branch = True
parallel = True

[report]
# Regexes for lines to exclude from consideration
exclude_lines =
    # Have to re-enable the standard pragma
    pragma: no cover

    # Don't complain about missing debug-only code:
    def __repr__
    #if self\.debug

    # Don't complain if tests don't hit defensive assertion code:
    raise AssertionError
    raise NotImplementedError

    # Don't complain if non-runnable code isn't run:
    if 0:
    if __name__ == .__main__.:
    def __main__\(\):

omit =
    mainserver/tests/*
EDIT:
I fixed the parallel switch in my ".coveragerc" file. I've also tried adding a sitecustomize.py like so in my site-packages directory:
import os
import coverage
os.environ['COVERAGE_PROCESS_START']='/sites/metrics_dev/.coveragerc'
coverage.process_startup()
I'm pretty sure it's still not working properly, though, because the "missing" report still shows lines that I know are running (they output to the console). I've also tried adding the environment variable in my test case file and also in the shell before running the test cases. I also tried explicitly calling the same things in the function that's called by multiprocessing.Process to start the new process.

First, the configuration setting you need is parallel, not parallel-mode. Second, you probably need to follow the directions in the Measuring Subprocesses section of the coverage.py docs.

Another thing to check is whether you see more than one coverage data file after running coverage. Maybe it's only a matter of combining them afterwards.
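For example, a multiprocessing run typically leaves one data file per process behind; combining them is a single extra step (a sketch, with made-up file names):
$ ls .coverage*
.coverage.myhost.12345.123456  .coverage.myhost.12346.654321
$ coverage combine
$ coverage report -m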

tl;dr — to use coverage + nosetests + nose’s --processes option, set coverage’s --concurrency option to multiprocessing, preferably in either .coveragerc or setup.cfg rather than on the command-line (see also: command line usage and configuration files).
Long version…
I also fought this for a while, having followed the documentation on Configuring Python for sub-process coverage to the letter. Finally, upon re-examining the output of coverage run --help a bit more closely, I stumbled across the --concurrency=multiprocessing option, which I’d never used before, and which seems to be the missing link. (In hindsight, this makes sense: nose’s --processes option uses the multiprocessing library under the hood.)
Here is a minimal configuration that works as expected:
unit.py:
def is_even(x):
    if x % 2 == 0:
        return True
    else:
        return False
test.py:
import time
from unittest import TestCase

import unit


class TestIsEvenTrue(TestCase):
    def test_is_even_true(self):
        time.sleep(1)  # verify multiprocessing is being used
        self.assertTrue(unit.is_even(2))


# use a separate class to encourage nose to use a separate process for this
class TestIsEvenFalse(TestCase):
    def test_is_even_false(self):
        time.sleep(1)
        self.assertFalse(unit.is_even(1))
setup.cfg:
[nosetests]
processes = 2
verbosity = 2
[coverage:run]
branch = True
concurrency = multiprocessing
parallel = True
source = unit
sitecustomize.py (note: located in site-packages)
import os

try:
    import coverage
    os.environ['COVERAGE_PROCESS_START'] = 'setup.cfg'
    coverage.process_startup()
except ImportError:
    pass
$ coverage run $(command -v nosetests)
test_is_even_false (test.TestIsEvenFalse) ... ok
test_is_even_true (test.TestIsEvenTrue) ... ok
----------------------------------------------------------------------
Ran 2 tests in 1.085s
OK
$ coverage combine && coverage report
Name      Stmts   Miss Branch BrPart  Cover
--------------------------------------------
unit.py       4      0      2      0   100%

Related

Python - Using pytest to skip test unless specified

Background
I am using pytest to test a web scraper that pushes data to a database. The class only pulls the html and pushes it to a database to be parsed later. Most of my tests use dummy data to represent the html.
Question
I want to write a test where a webpage from the website is scraped, but I want the test to be automatically turned off unless specified. A similar scenario would be an expensive or time-consuming test that you do not want to run every time.
Expected Solution
I am expecting some kind of marker that suppresses a test unless I tell pytest to run all suppressed tests, but I do not see that in the documentation.
What I have done
I am currently using the skip marker and commenting it out.
I also tried to use the skipif marker and give arguments to the Python script using the command pytest test_file.py 1 from the command prompt, with the code below in the test file. The problem is that when I try to provide an argument to my test file, pytest expects it to be another file name, so I get the error "no tests run in 0.00 seconds, ERROR: file not found: 1".
if len(sys.argv) == 1:
    RUN_ALL_TESTS = False
else:
    RUN_ALL_TESTS = True

...
# other tests
...

@pytest.mark.skipif(RUN_ALL_TESTS)
def test_scrape_website():
    ...
I might be able to treat the test as a fixture and use @pytest.fixture(autouse=False), though I am not sure how to override the autouse variable.
A similar solution was stated in How to skip a pytest using an external fixture?, but that solution seems more complicated than what I need.
The docs describe exactly your problem: https://docs.pytest.org/en/latest/example/simple.html#control-skipping-of-tests-according-to-command-line-option. Copying from there:
Here is a conftest.py file adding a --runslow command line option to
control skipping of pytest.mark.slow marked tests:
# content of conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )


def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now write a test module like this:
# content of test_module.py
import pytest


def test_func_fast():
    pass


@pytest.mark.slow
def test_func_slow():
    pass
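With that conftest.py and test module in place, the slow test is skipped unless the new option is passed:
$ pytest -q                 # test_func_slow is reported as skipped
$ pytest -q --runslow       # runs both tests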
There are a couple of ways to handle this, but I'll go over two common approaches I've seen in Python baselines.
1) Separate your tests by putting the "optional" tests in another directory.
Not sure what your project layout looks like, but you can do something like this (only the test directory is important, the rest is just a toy example layout):
README.md
setup.py
requirements.txt
test/
    unit/
        test_something.py
        test_something_else.py
    integration/
        test_optional.py
application/
    __init__.py
    some_module.py
Then, when you invoke pytest, you invoke it by doing pytest test/unit if you want to run just the unit tests (i.e. only test_something*.py files), or pytest test/integration if you want to run just the integration tests (i.e. only test_optional.py), or pytest test if you want to run all the tests. So, by default, you can just run pytest test/unit.
I recommend wrapping these calls in some sort of script. I prefer make since it is powerful for this type of wrapping. Then you can say make test and it just runs your default (fast) test suite, or make test_all, and it'll run all the tests (which may or may not be slow).
Example Makefile you could wrap with:
.PHONY: all clean install test test_int test_all uninstall

all: install

clean:
	rm -rf build
	rm -rf dist
	rm -rf *.egg-info

install:
	python setup.py install

test: install
	pytest -v -s test/unit

test_int: install
	pytest -v -s test/integration

test_all: install
	pytest -v -s test

uninstall:
	pip uninstall app_name
2) Mark your tests judiciously with the @pytest.mark.skipif decorator, but use an environment variable as the trigger
I don't like this solution as much; it feels a bit haphazard to me (it's hard to tell which set of tests is being run on any given pytest run). However, what you can do is define an environment variable and then rope that environment variable into the module to detect if you want to run all your tests. Environment variables are shell dependent, but I'll pretend you have a bash environment since that's a popular shell.
You could do export TEST_LEVEL="unit" for just fast unit tests (so this would be your default), or export TEST_LEVEL="all" for all your tests. Then in your test files, you can do what you were originally trying to do like this:
import os
...

@pytest.mark.skipif(os.environ["TEST_LEVEL"] == "unit")
def test_scrape_website():
    ...
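A slightly more defensive variant (my own sketch, not part of the original suggestion) reads the variable with a default, so an unset environment falls back to the fast level, and adds the reason string that pytest reports:
import os

import pytest


# Skip unless TEST_LEVEL is explicitly set to "all"; unset falls back to "unit".
@pytest.mark.skipif(os.environ.get("TEST_LEVEL", "unit") != "all",
                    reason="needs TEST_LEVEL=all to run")
def test_scrape_website():
    ...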
Note: Naming the test levels "unit" and "integration" is irrelevant. You can name them whatever you want. You can also have many many levels (like maybe nightly tests or performance tests).
Also, I think option 1 is the best way to go, since it not only clearly allows separation of testing, but it can also add semantics and clarity to what the tests mean and represent. But there is no "one size fits all" in software, you'll have to decide what approach you like based on your particular circumstances.
HTH!
A very simple solution is to use the -k argument. You can use the -k parameter to deselect certain tests. -k tries to match its argument to any part of the test's name or markers, and you can invert the match by using not (you can also use the boolean operators and and or). Thus -k 'not slow' deselects tests which have "slow" in the name, have a marker with "slow" in the name, or whose class/module name contains "slow".
For example, given this file:
import pytest


def test_true():
    assert True


@pytest.mark.slow
def test_long():
    assert False


def test_slow():
    assert False
When you run:
pytest -k 'not slow'
It outputs something like this (note that both failing tests were deselected because they matched the filter):
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
rootdir: c:\Users\User\Documents\python, inifile:
collected 3 items
test_thing.py . [100%]
============================= 2 tests deselected ==============================
=================== 1 passed, 2 deselected in 0.02 seconds ====================
Because of the eager matching you might want to do something like putting all your unittests in a directory called unittest and then marking the slow ones as slow_unittest (so as not to accidentally match a test that just so happens to have slow in the name). You could then use -k 'unittest and not slow_unittest' to match all your quick unit tests.
More pytest example marker usage
Here is a little class for reusing @xverges' code with multiple marks/cli options:
from dataclasses import dataclass

import pytest


@dataclass
class TestsWithMarkSkipper:
    ''' Util to skip tests with mark, unless cli option provided. '''
    test_mark: str
    cli_option_name: str
    cli_option_help: str

    def pytest_addoption_hook(self, parser):
        parser.addoption(
            self.cli_option_name,
            action="store_true",
            default=False,
            help=self.cli_option_help,
        )

    def pytest_collection_modifyitems_hook(self, config, items):
        if not config.getoption(self.cli_option_name):
            self._skip_items_with_mark(items)

    def _skip_items_with_mark(self, items):
        reason = "need {} option to run".format(self.cli_option_name)
        skip_marker = pytest.mark.skip(reason=reason)
        for item in items:
            if self.test_mark in item.keywords:
                item.add_marker(skip_marker)
Usage example (must be put in conftest.py):
slow_skipper = TestsWithMarkSkipper(
    test_mark='slow',
    cli_option_name="--runslow",
    cli_option_help="run slow tests",
)

pytest_addoption = slow_skipper.pytest_addoption_hook
pytest_collection_modifyitems = slow_skipper.pytest_collection_modifyitems_hook

Is it possible to run all unit tests?

I have two modules with two different classes and their corresponding test classes.
foo.py
------
class foo(object):
    def fooMethod(self):
        # smthg

bar.py
------
class bar(object):
    def barMethod(self):
        # smthg

fooTest.py
----------
class fooTest(unittest.TestCase):
    def fooMethodTest(self):
        # smthg

barTest.py
----------
class barTest(unittest.TestCase):
    def barMethodTest(self):
        # smthg
In every test and source module file, I erased the if __name__ == "__main__": block to increase coherency and obey object-oriented ideology.
Like in Java unit testing, I'm looking to create a module that runs all the unit tests. For example,
runAllTest.py
-------------
class runAllTest(unittest.TestCase):
    ?????

if __name__ == "__main__":
    ?????
I searched but didn't find any tutorial or example. Is it possible to do this? Why, or how?
Note: I'm using the Eclipse and PyDev distribution on a Windows machine.
When running unit tests based on the built-in python unittest module, at the root level of your project run
python -m unittest discover <module_name>
For the specific example above, it suffices to run
python -m unittest discover .
https://docs.python.org/2/library/unittest.html
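One caveat for the file names in this question: discovery's default pattern is test*.py, so fooTest.py and barTest.py will not be picked up unless you rename them or pass a matching pattern, for example:
python -m unittest discover -s . -p "*Test.py"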
You could create a TestSuite and run all your tests in its if __name__ == '__main__' block:
import unittest

from fooTest import fooTest
from barTest import barTest


def create_suite():
    test_suite = unittest.TestSuite()
    test_suite.addTest(fooTest('fooMethodTest'))
    test_suite.addTest(barTest('barMethodTest'))
    return test_suite


if __name__ == '__main__':
    suite = create_suite()
    runner = unittest.TextTestRunner()
    runner.run(suite)
If you do not want to create the test cases manually, look at this question/answer, which basically creates the test cases dynamically, or use some of the features of the unittest module like the test discovery feature and command line options.
I think what you are looking for is the TestLoader. With this you can load specific tests or modules or load everything under a given directory. Also, this post has some useful examples using a TestSuite instance.
EDIT: The code I usually have in my test.py:
if not popts.tests:
    suite = unittest.TestLoader().discover(os.path.dirname(__file__) + '/tests')
    #print(suite._tests)
    # Print outline
    lg.info(' * Going for Interactive net tests = ' + str(not tvars.NOINTERACTIVE))
    # Run
    unittest.TextTestRunner(verbosity=popts.verbosity).run(suite)
else:
    lg.info(' * Running specific tests')
    suite = unittest.TestSuite()
    # Load standard tests
    for t in popts.tests:
        test = unittest.TestLoader().loadTestsFromName("tests." + t)
        suite.addTest(test)
    # Run
    unittest.TextTestRunner(verbosity=popts.verbosity).run(suite)
Does two things:
If -t flag (tests) is not present, find and load all tests in directory
Else, load the requested tests one-by-one
I think you could just run the following command in the folder where your test files are located:
python -m unittest
as mentioned here in the docs: "when executed without arguments Test Discovery is started".
With PyDev, right-click on a folder in Eclipse and choose "Run As -> Python unit-test". This will run all tests in that folder (the names of the test files and methods have to start with "test_").
You are looking for nosetests.
You might need to rename your files; I'm not sure about the pattern nose uses to find the test files but, personally, I use *_test.py. It is possible to specify a custom pattern which your project uses for test filenames but I remember being unable to make it work so I ended up renaming my tests instead.
You also need to follow PEP 328 conventions to work with nose. I don't use IDEs with Python, but your IDE may already follow it; just read the PEP and check.
With a PEP 328 directory/package structure, you can run individual tests as
nosetests path.to.class_test
Note that instead of the usual directory separators (/ or \), I used dots.
To run all tests, simply invoke nosetests at the root of your project.

how to know if my Python tests are running in coverage mode?

I am running Ned Batchelder's coverage module on continuous integration using Travis CI but I want to run only integration tests and skip functional ones because they take too long and coverage measurement is not affected by them.
I created a special configuration for this, but I want to know if there is an alternate method of knowing, inside a Python script, whether the code is being run by coverage or not.
nose can definitely help with it:
Cover: code coverage plugin
Attribute selector plugin
you can mark tests with the @attr("no-coverage") decorator and run your coverage tests with the -a '!no-coverage' option
nose-exclude plugin
you can exclude specific test dirs and test files from running using --exclude-dir and --exclude-dir-file options
Hope that helps.
Based on the wording of your question I am assuming that you are not limiting what tests you are running with coverage and would like the functional tests to notice they are being run with coverage, and do nothing. A hacky way might be to look at sys.argv in the functional tests and do things differently if you detect coverage usage. But I think a better approach would be to have functional tests and unit tests in separate sibling directories, and tell coverage to run only the tests in the unit test directory. Potentially you could also use the --omit option to limit which tests are being run.
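For example, a .coveragerc along these lines (the paths are placeholders for your own layout) measures only the package itself and leaves the functional-test directory out of the report:
[run]
source = mypackage
omit =
    */tests/functional/*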
Travis CI provides a couple of environment variables that can be used for this; in my case any of this will serve:
CI=true
TRAVIS=true
Even though both answers provided before are really useful, I think this solution is easier to implement for what I need.
I needed to determine if my tests were running under plain debug mode, with coverage, or just normally. After a good deal of experimentation I came up with this:
import sys

# Detect PyCharm debugging mode
get_trace = getattr(sys, 'gettrace', lambda: None)
if get_trace() is None:
    debug = False
    print('runnin normsies')
else:
    debug = True
    print('debuggin')

if 'coverage' in sys.modules.keys():
    print('covered')
Here's an implementation of the check whether a test is run in coverage mode. The nice thing about this is that you can use gettrace_result to check other conditions, e.g., whether the test is run by a debugger instead of coverage:
import sys


def is_run_with_coverage():
    """Check whether test is run with coverage."""
    gettrace = getattr(sys, "gettrace", None)
    if gettrace is None:
        return False
    else:
        gettrace_result = gettrace()

    try:
        from coverage.pytracer import PyTracer
        from coverage.tracer import CTracer
        if isinstance(gettrace_result, (CTracer, PyTracer)):
            return True
    except ImportError:
        pass

    return False
You can use pytest.mark.skipif to skip tests that shouldn't be run in coverage mode.
@pytest.mark.skipif(is_run_with_coverage())
def test_to_skip_in_coverage_mode():
    ...
From the coverage.py documentation:
Coverage.py sets an environment variable, COVERAGE_RUN, to indicate that your code is running under coverage measurement.
Important: this option is only available in version 6.1 and later of the coverage module.
If detectcoverage.py contains:
import os

def detect_coverage():
    return os.environ.get('COVERAGE_RUN', None) is not None

if detect_coverage():
    print("running in coverage mode")
else:
    print("not running in coverage mode")
then running that looks like:
$ coverage run detectcoverage.py
running in coverage mode
$ python detectcoverage.py
not running in coverage mode

python nosetests equivalent of unittest Testsuite in the test file

In nosetests, I know that you can specify which tests you want to run via a nosetests config file as such:
[nosetests]
tests=testIWT_AVW.py:testIWT_AVW.tst_bynd1,testIWT_AVW.py:testIWT_AVW.tst_bynd3
However, the above just looks messy and becomes harder to maintain when a lot of tests are added, especially without being able to use line breaks. I found it a lot more convenient to specify which tests I want to run using unittest's TestSuite feature, e.g.
def custom_suite():
    suite = unittest.TestSuite()
    suite.addTest(testIWT_AVW('tst_bynd1'))
    suite.addTest(testIWT_AVW('tst_bynd3'))
    return suite

if __name__ == "__main__":
    runner = unittest.TextTestRunner()
    runner.run(custom_suite())
Question: How do I specify which tests should be run by nosetests within my .py file? Thanks.
P.S. If there is a way to specify tests via a nosetest config file that doesn't force all tests to be written on one line I would be open to it as well, as a second alternative
I'm not entirely sure whether you want to run the tests programmatically or from the command line. Either way this should cover both:
import itertools

from nose.loader import TestLoader
from nose import run
from nose.suite import LazySuite

paths = ("/path/to/my/project/module_a",
         "/path/to/my/project/module_b",
         "/path/to/my/project/module_c")


def run_my_tests():
    all_tests = ()
    for path in paths:
        all_tests = itertools.chain(all_tests, TestLoader().loadTestsFromDir(path))
    suite = LazySuite(all_tests)
    run(suite=suite)


if __name__ == '__main__':
    run_my_tests()
Note that the nose.loader.TestLoader object has a number of different methods available for loading tests.
You can call the run_my_tests function from other code, or you can run this from the command line with a Python interpreter rather than through nose. If you have other nose configuration, you may need to pass that in programmatically as well.
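If you do need to pass extra nose configuration programmatically, nose.run accepts the same argv list you would use on the command line, so something along these lines should work (a sketch built on the code above; the options shown are only examples):
def run_my_tests_with_options():
    all_tests = ()
    for path in paths:
        all_tests = itertools.chain(all_tests, TestLoader().loadTestsFromDir(path))
    # The first argv entry is the program name; the rest are ordinary nose options.
    run(suite=LazySuite(all_tests), argv=["nosetests", "-v", "--nocapture"])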
If I'm correctly understanding your question, you have several options here:
you can mark your tests with the special nose decorators istest and nottest (see docs)
you can mark tests with tags (a brief sketch of both of these options follows this list)
you can join test cases into test suites. I haven't used this myself, but it seems that you have to override nose's default test discovery to respect your test suites (see docs)
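For the first two options, a brief sketch (the function names here are made up for illustration):
from nose.tools import istest, nottest
from nose.plugins.attrib import attr


@nottest
def test_helper():
    # Looks like a test because of the name, but nose will ignore it.
    pass


@istest
def check_addition():
    # Collected even though the name does not start with "test".
    assert 1 + 1 == 2


@attr('slow')
def test_full_crawl():
    # Tagged; select or deselect with: nosetests -a slow  /  nosetests -a '!slow'
    pass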
Hope that helps.

Python CLI program unit testing

I am working on a Python command-line interface program, and I find testing it tedious. For example, here is the help information of the program:
usage: pyconv [-h] [-f ENCODING] [-t ENCODING] [-o file_path] file_path

Convert text file from one encoding to another.

positional arguments:
  file_path

optional arguments:
  -h, --help            show this help message and exit
  -f ENCODING, --from ENCODING
                        Encoding of source file
  -t ENCODING, --to ENCODING
                        Encoding you want
  -o file_path, --output file_path
                        Output file path
When I make changes to the program and want to test something, I must open a terminal, type the command (with options and arguments), press enter, and see if any error occurs while running. If an error does occur, I must go back to the editor and check the code from top to bottom, guess where the bug is, make small changes, write print lines, return to the terminal, run the command again... and so on, over and over.
So my question is: what is the best way to do testing with a CLI program? Can it be as easy as unit testing normal Python scripts?
I think it's perfectly fine to test functionally on a whole-program level. It's still possible to test one aspect/option per test. This way you can be sure that the program really works as a whole. Writing unit-tests usually means that you get to execute your tests quicker and that failures are usually easier to interpret/understand. But unit-tests are typically more tied to the program structure, requiring more refactoring effort when you internally change things.
Anyway, using py.test, here is a little example for testing a latin1 to utf8 conversion for pyconv:
# content of test_pyconv.py

import pytest

# we reuse a bit of pytest's own testing machinery, this should eventually come
# from a separately installable pytest-cli plugin.
pytest_plugins = ["pytester"]


@pytest.fixture
def run(testdir):
    def do_run(*args):
        args = ["pyconv"] + list(args)
        return testdir._run(*args)
    return do_run


def test_pyconv_latin1_to_utf8(tmpdir, run):
    input = tmpdir.join("example.txt")
    content = unicode("\xc3\xa4\xc3\xb6", "latin1")
    with input.open("wb") as f:
        f.write(content.encode("latin1"))
    output = tmpdir.join("example.txt.utf8")
    result = run("-flatin1", "-tutf8", input, "-o", output)
    assert result.ret == 0
    with output.open("rb") as f:
        newcontent = f.read()
    assert content.encode("utf8") == newcontent
After installing pytest ("pip install pytest") you can run it like this::
$ py.test test_pyconv.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.5dev1
collected 1 items
test_pyconv.py .
========================= 1 passed in 0.40 seconds =========================
The example reuses some internal machinery of pytest's own testing by leveraging pytest's fixture mechanism, see http://pytest.org/latest/fixture.html. If you forget about the details for a moment, you can just work from the fact that "run" and "tmpdir" are provided for helping you to prepare and run tests. If you want to play, you can try to insert a failing assert-statement or simply "assert 0" and then look at the traceback or issue "py.test --pdb" to enter a python prompt.
Start from the user interface with functional tests and work down towards unit tests. It can feel difficult, especially when you use the argparse module or the click package, which take control of the application entry point.
The cli-test-helpers Python package has examples and helper functions (context managers) for a holistic approach on writing tests for your CLI. It's a simple idea, and one that works perfectly with TDD:
Start with functional tests (to ensure your user interface definition) and
Work towards unit tests (to ensure your implementation contracts)
Functional tests
NOTE: I assume you develop code that is deployed with a setup.py file or is run as a module (-m).
Is the entrypoint script installed? (tests the configuration in your setup.py)
Can this package be run as a Python module? (i.e. without having to be installed)
Is command XYZ available? etc. Cover your entire CLI usage here!
Those tests are simplistic: They run the shell command you would enter in the terminal, e.g.
import os

def test_entrypoint():
    exit_status = os.system('foobar --help')
    assert exit_status == 0
Note the trick to use a non-destructive operation (e.g. --help or --version) as we can't mock anything with this approach.
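The same non-destructive trick covers the "run as a Python module" item from the list above; a sketch, using the hypothetical foobar package from this answer:
def test_runas_module():
    """Can this package be run as a Python module?"""
    exit_status = os.system('python -m foobar --help')
    assert exit_status == 0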
Towards unit tests
To test single aspects inside the application you will need to mimic things like command line arguments and maybe environment variables. You will also need to catch the exiting of your script to keep the tests from failing with SystemExit exceptions.
Example with ArgvContext to mimic command line arguments:
import pytest
from unittest.mock import patch
from cli_test_helpers import ArgvContext
import foobar.cli

@patch('foobar.command.baz')
def test_cli_command(mock_command):
    """Is the correct code called when invoked via the CLI?"""
    with ArgvContext('foobar', 'baz'), pytest.raises(SystemExit):
        foobar.cli.main()
    assert mock_command.called
Note that we mock the function that we want our CLI framework (click in this example) to call, and that we catch SystemExit that the framework naturally raises. The context managers are provided by cli-test-helpers and pytest.
Unit tests
The rest is business as usual. With the above two strategies we've overcome the control a CLI framework may have taken away from us. The rest is usual unit testing. TDD-style hopefully.
Disclosure: I am the author of the cli-test-helpers Python package.
So my question is, what is the best way to do testing with CLI program, can it be as easy as unit testing with normal python scripts?
The only difference is that when you run Python module as a script, its __name__ attribute is set to '__main__'. So generally, if you intend to run your script from command line it should have following form:
import sys

# function and class definitions, etc.
# ...

def foo(arg):
    pass

def main():
    """Entry point to the script"""
    # Do parsing of command line arguments and other stuff here. And then
    # make calls to whatever functions and classes that are defined in your
    # module. For example:
    foo(sys.argv[1])

if __name__ == '__main__':
    main()
Now there is no difference in how you use it: as a script or as a module. So inside your unit-testing code you can just import the foo function, call it, and make any assertions you want.
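As a concrete illustration of that last point (assuming the template above is saved as mymodule.py; the name is made up), a test module could exercise both foo and main with a patched sys.argv:
import unittest
from unittest import mock

import mymodule  # hypothetical: the script above saved as mymodule.py


class TestMyModule(unittest.TestCase):
    def test_foo_directly(self):
        # Call the function without going through the command line at all.
        self.assertIsNone(mymodule.foo("some-arg"))

    def test_main_parses_argv(self):
        # Simulate "python mymodule.py some-arg" by patching sys.argv.
        with mock.patch("sys.argv", ["mymodule.py", "some-arg"]):
            mymodule.main()


if __name__ == '__main__':
    unittest.main()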
Maybe too little too late, but you can always use
import os

result = os.system('<insert your command with options here>')
assert result == 0
That way, you can run your program as if it were invoked from the command line and evaluate the exit code.
(Update after I studied pytest)
You can also use capsys.
(from running pytest --fixtures)
capsys
Enable text capturing of writes to sys.stdout and sys.stderr.
The captured output is made available via ``capsys.readouterr()`` method
calls, which return a ``(out, err)`` namedtuple.
``out`` and ``err`` will be ``text`` objects.
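A minimal, self-contained sketch of capsys in use (greet is a stand-in for whatever part of your CLI writes to stdout and stderr):
import sys


def greet(name):
    print("Hello, %s!" % name)
    print("logging a warning", file=sys.stderr)


def test_greet_output(capsys):
    greet("world")
    out, err = capsys.readouterr()
    assert out == "Hello, world!\n"
    assert err == "logging a warning\n"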
This isn't for Python specifically, but what I do to test command-line scripts is to run them with various predetermined inputs and options and store the correct output in a file. Then, to test them when I make changes, I simply run the new script and pipe the output into diff correct_output -. If the files are the same, it outputs nothing. If they're different, it shows you where. This will only work if you are on Linux or OS X; on Windows, you will have to get MSYS.
Example:
python mycliprogram --someoption "some input" | diff correct_output -
To make it even easier, you can add all these test runs to your 'make test' Makefile target, which I assume you already have. ;)
If you are running many of these at once, you could make it a little more obvious where each one ends by adding a fail tag:
python mycliprogram --someoption "some input" | diff correct_output - || tput setaf 1 && echo "FAILED"
The short answer is yes, you can use unit tests, and should. If your code is well structured, it should be quite easy to test each component separately, and if you need to, you can always mock sys.argv to simulate running it with different arguments.
pytest-console-scripts is a Pytest plugin for testing python scripts installed via console_scripts entry point of setup.py.
For Python 3.5+, you can use the simpler subprocess.run to call your CLI command from your test.
Using pytest:
import subprocess

def test_command__works_properly():
    try:
        result = subprocess.run(
            ['command', '--argument', 'value'],
            check=True, capture_output=True, text=True)
    except subprocess.CalledProcessError as error:
        print(error.stdout)
        print(error.stderr)
        raise error
The output can be accessed via result.stdout, result.stderr, and result.returncode if needed.
The check parameter causes an exception to be raised if an error occurs. Note Python 3.7+ is required for the capture_output and text parameters, which simplify capturing and reading stdout/stderr.
Given that you are explicitly asking about testing for a command line application, I believe that you are aware of unit-testing tools in python and that you are actually looking for a tool to automate end-to-end tests of a command line tool. There are a couple of tools out there that are specifically designed for that. If you are looking for something that's pip-installable, I would recommend cram. It integrates well with the rest of the python environment (e.g. through a pytest extension) and it's quite easy to use:
Simply write the commands you want to run prepended with $, with the expected output on the lines below. For example, the following would be a valid cram test:
  $ echo Hello
    Hello
By having four spaces in front of expected output and two in front of the test, you can actually use these tests to also write documentation. More on that on the website.
You can use standard unittest module:
# python -m unittest <test module>
or use nose as a testing framework. Just write classic unittest files in separate directory and run:
# nosetests <test modules directory>
Writing unit tests is easy. Just follow an online manual for unit testing.
I would not test the program as a whole; this is not a good test strategy and may not actually catch the actual spot of the error. The CLI is just a front end to an API. You test the API via your unit tests, and then when you make a change to a specific part, you have a test case to exercise that change.
So, restructure your application so that you test the API and not the application itself. But you can also have a functional test that actually does run the full application and checks that the output is correct.
In short, yes, testing the code is the same as testing any other code, but you must test the individual parts rather than their combination as a whole to ensure that your changes do not break everything.
