I have several test modules that are all invoked together via a driver script that can take a variety of arguments. The tests themselves are written using the Python unittest module.
import optparse
import unittest
import sys
import os

from tests import testvalidator
from tests import testmodifier
from tests import testimporter

# modify the path so that the test modules under /tests have access to the project root
sys.path.insert(0, os.path.dirname(__file__))

def run(verbosity):
    if verbosity == "0":
        sys.stdout = open(os.devnull, 'w')
    test_suite = unittest.TestSuite()
    test_suite.addTest(unittest.TestLoader().loadTestsFromTestCase(testvalidator.TestValidator))
    test_suite.addTest(unittest.TestLoader().loadTestsFromTestCase(testmodifier.TestModifier))
    test_suite.addTest(unittest.TestLoader().loadTestsFromTestCase(testimporter.TestDataImporter))
    unittest.TextTestRunner(verbosity=int(verbosity)).run(test_suite)

if __name__ == "__main__":
    # a simple way to control output verbosity
    parser = optparse.OptionParser()
    parser.add_option("--verbosity", dest="verbosity", default="0")
    (options, args) = parser.parse_args()
    run(options.verbosity)
My issue is that, within these test modules, I have certain tests I'd like to skip based on different parameters passed to the driver. I'm aware that unittest provides a family of decorators meant to do this, but I don't know the best way to pass this information on to the individual modules. If I had a --skip-slow argument, for example, how could I then annotate tests as slow, and have them skipped?
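For reference, the decorators I mean look something like this; SKIP_SLOW here is just a placeholder flag, and setting it from the driver is exactly the part I don't know how to do:

import unittest

SKIP_SLOW = True  # placeholder; how should the driver set this?

class TestValidator(unittest.TestCase):
    @unittest.skip("always skipped")
    def test_not_ready_yet(self):
        pass

    @unittest.skipIf(SKIP_SLOW, "slow test skipped because --skip-slow was given")
    def test_slow_validation(self):
        pass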
Thank you for your time.
I had in fact been wondering this myself, and finally found the solution.
Main file:

...
if __name__ == '__main__':
    args = argparser()
    from tests import *
    ...
And in your test modules, just do:
from __main__ import args
print(args)
I tested this out, and it worked rather nicely. The nice thing is how simple it is, and it's not too much of a hack at all.
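To make that concrete, here is a minimal sketch of the pattern, assuming an argparse-based driver; the argument and module names are only illustrative:

# driver.py (sketch)
import argparse
import unittest

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--skip-slow", action="store_true")
    args = parser.parse_args()           # module-level name in __main__

    from tests import testvalidator      # imported only after args exists
    suite = unittest.TestLoader().loadTestsFromModule(testvalidator)
    unittest.TextTestRunner().run(suite)

# tests/testvalidator.py (sketch)
import unittest
from __main__ import args

class TestValidator(unittest.TestCase):
    @unittest.skipIf(args.skip_slow, "skipped because --skip-slow was given")
    def test_slow_validation(self):
        self.assertTrue(True)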
You can use the nose test runner with the attrib plugin, which lets you select test cases based on attributes. In particular, the example in the plugin documentation uses @attr('slow') to mark slow test cases.
After that, from the command line:
To select all the test cases marked as slow:
$ nosetests -a slow
To select all the test cases not marked as slow:
$ nosetests -a '!slow'
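Marking a test for the attrib plugin looks like this (a minimal sketch; the test body is illustrative):

from nose.plugins.attrib import attr

@attr('slow')
def test_big_download():
    # selected by "nosetests -a slow", excluded by "nosetests -a '!slow'"
    assert True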
Here's how I solved this problem. At the bottom of my module, I put this code to set a global variable based on the presence of a --slow argument in argv:
run_slow_tests = False  # module-level default; flipped below if --slow is given

if __name__ == "__main__":
    try:
        i = sys.argv.index("--slow")
        run_slow_tests = True
        del sys.argv[i]
    except ValueError:
        pass
    unittest.main()
Then at the beginning of test functions which would be slow to run, I put this statement. It raises the unittest.SkipTest() exception if the flag isn't set to include slow tests.
if not run_slow_tests:
    raise unittest.SkipTest('Slow test skipped, unless --slow given in sys.argv.')
Then when I invoke the module normally, the slow tests are skipped.
% python src/my_test.py -v
test_slow (__main__.Tests) ... skipped 'Slow test skipped, unless --slow given in sys.argv.'
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK (skipped=1)
And when I add the --slow, the slow tests in that module run:
% python src/my_test.py -v --slow
test_slow (__main__.Tests) ... ok
----------------------------------------------------------------------
Ran 1 test in 10.110s
OK
Unfortunately, this doesn't work with unittest's test discovery.
% python -m unittest discover src "*_test.py" --slow
usage: python -m unittest discover [-h] [-v] [-q] [--locals] [-f] [-c] [-b]
[-k TESTNAMEPATTERNS] [-s START]
[-p PATTERN] [-t TOP]
python -m unittest discover: error: unrecognized arguments: --slow
It also didn't work to use the @unittest.skipUnless() decorator. I suspect this is because the decorator evaluates its arguments at module definition time, but the argument isn't set to the correct value until module run time, which is later.
It isn't perfect, but it lets me work within the Python standard library. A requirement like this is a good reason to adopt a better framework, such as nose. For my current project, though, I prefer to avoid installing any outside modules.
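One workaround that does play nicely with discovery, sketched here as an alternative rather than something from the answer above, is to key @unittest.skipUnless off an environment variable instead of sys.argv; the environment is already available when the decorator is evaluated at module definition time (RUN_SLOW_TESTS is a hypothetical variable name):

import os
import unittest

RUN_SLOW = os.environ.get("RUN_SLOW_TESTS") == "1"  # hypothetical env var

class Tests(unittest.TestCase):
    @unittest.skipUnless(RUN_SLOW, "set RUN_SLOW_TESTS=1 to run slow tests")
    def test_slow(self):
        pass

Invoked as RUN_SLOW_TESTS=1 python -m unittest discover src "*_test.py", the slow tests run; without the variable they are skipped.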
Background
I am using pytest to test a web scraper that pushes data to a database. The class only pulls the HTML and pushes it to a database to be parsed later. Most of my tests use dummy data to represent the HTML.
Question
I want a test that scrapes an actual webpage from the site, but I want that test to be automatically turned off unless specified. A similar scenario would be an expensive or time-consuming test that you do not always want to run.
Expected Solution
I am expecting some kind of marker that suppresses a test unless I tell pytest to run all suppressed tests, but I do not see that in the documentation.
What I have done
I am currently using the skip marker and commenting it out when I want the test to run.
I also tried using the skipif marker and giving an argument to the script with the command pytest test_file.py 1, together with the code below in the test file. The problem is that when I try to provide an argument to my test file, pytest expects it to be another file name, so I get the error "no tests run in 0.00 seconds, ERROR: file not found: 1".
if len(sys.argv) == 1:
    RUN_ALL_TESTS = False
else:
    RUN_ALL_TESTS = True

...
# other tests
...

@pytest.mark.skipif(RUN_ALL_TESTS)
def test_scrape_website():
    ...
I might be able to treat the test as a fixture and use @pytest.fixture(autouse=False); I'm not sure how to override the autouse variable, though.
A similar solution was stated in How to skip a pytest using an external fixture?, but that solution seems more complicated than what I need.
The docs describe exactly your problem: https://docs.pytest.org/en/latest/example/simple.html#control-skipping-of-tests-according-to-command-line-option. Copying from there:
Here is a conftest.py file adding a --runslow command line option to
control skipping of pytest.mark.slow marked tests:
# content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now write a test module like this:
# content of test_module.py
import pytest

def test_func_fast():
    pass

@pytest.mark.slow
def test_func_slow():
    pass
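With that conftest.py in place, a plain pytest run reports test_func_slow as skipped with the reason "need --runslow option to run", while pytest --runslow runs both tests.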
There are a couple of ways to handle this, but I'll go over two common approaches I've seen in Python baselines.
1) Separate your tests by putting the "optional" tests in another directory.
I'm not sure what your project layout looks like, but you can do something like this (only the test directory is important; the rest is just a toy example layout):
README.md
setup.py
requirements.txt
test/
    unit/
        test_something.py
        test_something_else.py
    integration/
        test_optional.py
application/
    __init__.py
    some_module.py
Then, when you invoke pytest, you invoke it by doing pytest test/unit if you want to run just the unit tests (i.e. only test_something*.py files), or pytest test/integration if you want to run just the integration tests (i.e. only test_optional.py), or pytest test if you want to run all the tests. So, by default, you can just run pytest test/unit.
I recommend wrapping these calls in some sort of script. I prefer make since it is powerful for this type of wrapping. Then you can say make test and it just runs your default (fast) test suite, or make test_all, and it'll run all the tests (which may or may not be slow).
Example Makefile you could wrap with:
.PHONY: all clean install test test_int test_all uninstall

all: install

clean:
	rm -rf build
	rm -rf dist
	rm -rf *.egg-info

install:
	python setup.py install

test: install
	pytest -v -s test/unit

test_int: install
	pytest -v -s test/integration

test_all: install
	pytest -v -s test

uninstall:
	pip uninstall app_name
2) Mark your tests judiciously with the @pytest.mark.skipif decorator, but use an environment variable as the trigger
I don't like this solution as much; it feels a bit haphazard to me (it's hard to tell which set of tests is being run on any given pytest run). However, what you can do is define an environment variable and then rope that environment variable into the module to decide whether to run all your tests. Environment variables are shell-dependent, but I'll pretend you have a bash environment since that's a popular shell.
You could do export TEST_LEVEL="unit" for just fast unit tests (so this would be your default), or export TEST_LEVEL="all" for all your tests. Then in your test files, you can do what you were originally trying to do like this:
import os
...

@pytest.mark.skipif(os.environ["TEST_LEVEL"] == "unit",
                    reason="integration test; set TEST_LEVEL=all to run")
def test_scrape_website():
    ...
Note: Naming the test levels "unit" and "integration" is irrelevant. You can name them whatever you want, and you can also have many levels (like nightly tests or performance tests).
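For instance, extending the environment-variable idea from above to an extra level might look like this (a sketch; the variable name and level scheme are assumptions, not part of the original answer):

import os
import pytest

TEST_LEVEL = os.environ.get("TEST_LEVEL", "unit")  # hypothetical default

# a reusable marker: skipped unless TEST_LEVEL is set to "nightly"
nightly = pytest.mark.skipif(TEST_LEVEL != "nightly",
                             reason="set TEST_LEVEL=nightly to run nightly tests")

@nightly
def test_full_regression():
    assert True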
Also, I think option 1 is the best way to go, since it not only clearly separates the tests, but also adds semantics and clarity to what the tests mean and represent. But there is no "one size fits all" in software; you'll have to decide which approach you like based on your particular circumstances.
HTH!
A very simple solution is to use the -k argument to deselect certain tests. -k tries to match its argument against any part of a test's name or its markers, and you can invert the match with not (you can also use the boolean operators and and or). Thus -k 'not slow' skips tests that have "slow" in the name, carry a marker with "slow" in the name, or whose class/module name contains "slow".
For example, given this file:
import pytest

def test_true():
    assert True

@pytest.mark.slow
def test_long():
    assert False

def test_slow():
    assert False
When you run:
pytest -k 'not slow'
It outputs something like this (note that both failing tests were deselected because they matched the filter):
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
rootdir: c:\Users\User\Documents\python, inifile:
collected 3 items
test_thing.py . [100%]
============================= 2 tests deselected ==============================
=================== 1 passed, 2 deselected in 0.02 seconds ====================
Because of the eager matching you might want to do something like putting all your unit tests in a directory called unittest and then marking the slow ones as slow_unittest (so as not to accidentally match a test that just happens to have slow in the name). You could then use -k 'unittest and not slow_unittest' to match all your quick unit tests.
More pytest example marker usage
Here is a little class that lets you reuse @xverges's code for multiple marks/CLI options:
import pytest
from dataclasses import dataclass

@dataclass
class TestsWithMarkSkipper:
    ''' Util to skip tests with mark, unless cli option provided. '''
    test_mark: str
    cli_option_name: str
    cli_option_help: str

    def pytest_addoption_hook(self, parser):
        parser.addoption(
            self.cli_option_name,
            action="store_true",
            default=False,
            help=self.cli_option_help,
        )

    def pytest_collection_modifyitems_hook(self, config, items):
        if not config.getoption(self.cli_option_name):
            self._skip_items_with_mark(items)

    def _skip_items_with_mark(self, items):
        reason = "need {} option to run".format(self.cli_option_name)
        skip_marker = pytest.mark.skip(reason=reason)
        for item in items:
            if self.test_mark in item.keywords:
                item.add_marker(skip_marker)
Usage example (must be put in conftest.py):
slow_skipper = TestsWithMarkSkipper(
    test_mark='slow',
    cli_option_name="--runslow",
    cli_option_help="run slow tests",
)
pytest_addoption = slow_skipper.pytest_addoption_hook
pytest_collection_modifyitems = slow_skipper.pytest_collection_modifyitems_hook
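With that in conftest.py, tests marked @pytest.mark.slow are skipped by default and run when pytest is invoked with --runslow, just like in the plain conftest.py example earlier.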
I'm considering converting some unittest.TestCase tests into Pytest ones to take advantage of Pytest's fixtures. One feature of unittest that I wasn't able to easily find the equivalent of in Pytest, however, is the ability to create testing suites and run them. I currently often do something like this:
import unittest

class TestSomething(unittest.TestCase):
    def test_1(self):
        self.assertEqual("hello".upper(), "HELLO")

    def test_2(self):
        self.assertEqual(1+1, 2)

if __name__ == "__main__":
    suite = unittest.TestSuite()
    # suite.addTest(TestSomething('test_1'))
    suite.addTest(TestSomething('test_2'))
    runner = unittest.TextTestRunner()
    runner.run(suite)
By commenting in and out the lines with addTest, I can easily select which tests to run. How would I do something similar with Pytest?
You can use the -k argument to run specific tests. For example
# put this in test.py
import unittest

class TestSomething(unittest.TestCase):
    def test_1(self):
        self.assertEqual("hello".upper(), "HELLO")

    def test_2(self):
        self.assertEqual(1+1, 2)
Running all tests in the class TestSomething can be done like this:
py.test test.py -k TestSomething
Running only test_2:
py.test test.py -k "TestSomething and test_2"
More examples in the documentation
Another way to go is to use special test names. These can be configured in the pytest.ini file.
# content of pytest.ini
# can also be defined in tox.ini or setup.cfg file, although the section
# name in setup.cfg files should be "tool:pytest"
[pytest]
python_files=check_*.py
python_classes=Check
python_functions=*_check
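With those settings, pytest would collect files, classes, and functions named like this hypothetical example instead of the default test_* names:

# content of check_myservice.py   (matches python_files = check_*.py)
class CheckLogin:                  # matches python_classes = Check
    def credentials_check(self):   # matches python_functions = *_check
        assert True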
Another way is to take action in conftest.py. In this example the collect_ignore config variable is used. It is a list of test paths that are to be ignored. Here test_something.py is always ignored during collection, and test_other_module_py2.py is ignored when testing with Python 3.
# content of conftest.py
import sys

collect_ignore = ["test_something/test_something.py"]
if sys.version_info[0] > 2:
    collect_ignore.append("test_other/test_other_module_py2.py")
Since pytest 2.6 it is also possible to omit classes from test collection like this:
# Will not be discovered as a test
class TestClass:
    __test__ = False
These examples were loosely taken from the pytest documentation chapter Changing standard (Python) test discovery.
In addition to using -k filters, you can name the specific test class or case you want to run:
py.test test.py::TestSomething::test_2
This would run just test_2.
I think the best way to do this is to use custom pytest markers.
You should mark specific tests (which you want to run) with
@pytest.mark.mymarkername
And run only the tests with that custom marker using the command:
py.test -v -m mymarkername
Here you can find more info regarding markers:
http://doc.pytest.org/en/latest/example/markers.html
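A minimal sketch of what that looks like (the marker name is arbitrary):

import pytest

@pytest.mark.mymarkername
def test_runs_only_with_marker():
    assert True

def test_always_collected():
    assert True

Running py.test -v -m mymarkername then selects only the first test.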
Building on mbatchkarov's answer, since the names of my tests can get quite lengthy, I would still like to be able to select tests by commenting lines in and out and hitting "Ctrl+B" in Sublime (or "Ctrl+R" using the Atom Runner). One way to do this is as follows:
import unittest
import pytest

class TestSomething(unittest.TestCase):
    def test_1(self):
        self.assertEqual("hello".upper(), "HELLO")

    def test_2(self):
        self.assertEqual(1+1, 2)

if __name__ == "__main__":
    tests_to_run = []
    # tests_to_run.append('TestSomething and test_1')
    tests_to_run.append('TestSomething and test_2')
    tests_to_run = " or ".join(tests_to_run)
    args = [__file__, '-k', tests_to_run]
    pytest.main(args)
The idea behind this is that because pytest accepts a string expression to match tests (rather than just a list of tests), you generate a list of expressions that each match exactly one test and concatenate them using or.
I have two modules with two different classes and their corresponding test classes.
foo.py
------
class foo(object):
    def fooMethod(self):
        # something

bar.py
------
class bar(object):
    def barMethod(self):
        # something

fooTest.py
------
class fooTest(unittest.TestCase):
    def fooMethodTest(self):
        # something

barTest.py
------
class barTest(unittest.TestCase):
    def barMethodTest(self):
        # something
In every test and source module I removed the if __name__ == "__main__": block, to increase coherency and follow object-oriented style.
As with JUnit in Java, I'm looking to create a single module that runs all the unit tests. For example,
runAllTest.py
-------------
class runAllTest(unittest.TestCase):
    ?????

if __name__ == "__main__":
    ?????
I searched around but didn't find any tutorial or example. Is it possible to do this, and if so, how?
Note: I'm using the Eclipse and PyDev distribution on a Windows machine.
When running unit tests based on the built-in Python unittest module, at the root level of your project run
python -m unittest discover <module_name>
For the specific example above, it suffices to run
python -m unittest discover .
https://docs.python.org/2/library/unittest.html
You could create a TestSuite and run all your tests in its if __name__ == '__main__' block:
import unittest

from fooTest import fooTest
from barTest import barTest

def create_suite():
    test_suite = unittest.TestSuite()
    test_suite.addTest(fooTest('fooMethodTest'))
    test_suite.addTest(barTest('barMethodTest'))
    return test_suite

if __name__ == '__main__':
    suite = create_suite()
    runner = unittest.TextTestRunner()
    runner.run(suite)
If you do not want to create the test cases manually, look at this question/answer, which basically creates the test cases dynamically, or use some of the features of the unittest module, like the test discovery feature and command line options.
I think what you are looking for is the TestLoader. With this you can load specific tests or modules or load everything under a given directory. Also, this post has some useful examples using a TestSuite instance.
EDIT: The code I usually have in my test.py:
if not popts.tests:
    suite = unittest.TestLoader().discover(os.path.dirname(__file__)+'/tests')
    #print(suite._tests)
    # Print outline
    lg.info(' * Going for Interactive net tests = '+str(not tvars.NOINTERACTIVE))
    # Run
    unittest.TextTestRunner(verbosity=popts.verbosity).run(suite)
else:
    lg.info(' * Running specific tests')
    suite = unittest.TestSuite()
    # Load standard tests
    for t in popts.tests:
        test = unittest.TestLoader().loadTestsFromName("tests."+t)
        suite.addTest(test)
    # Run
    unittest.TextTestRunner(verbosity=popts.verbosity).run(suite)
It does two things:
If the -t (tests) flag is not present, find and load all tests in the directory.
Otherwise, load the requested tests one by one.
I think you could just run the following command from the folder where your test files are located:
python -m unittest
As mentioned in the docs, "when executed without arguments Test Discovery is started".
With PyDev, right-click on a folder in Eclipse and choose "Run as -> Python unit-test". This will run all tests in that folder (the names of the test files and methods have to start with "test_").
You are looking for nosetests.
You might need to rename your files; I'm not sure about the pattern nose uses to find the test files but, personally, I use *_test.py. It is possible to specify a custom pattern which your project uses for test filenames but I remember being unable to make it work so I ended up renaming my tests instead.
You also need to follow PEP 328 conventions to work with nose. I don't use IDEs with Python, but your IDE may already follow it; just read the PEP and check.
With a PEP 328 directory/package structure, you can run individual tests as
nosetests path.to.class_test
Note that instead of the usual directory separators (/ or \), I used dots.
To run all tests, simply invoke nosetests at the root of your project.
I'm using nose to run my "unittest" tests and use nose-cov to include coverage reports. These all work fine, but part of my tests require running some code as a multiprocessing.Process. The nose-cov docs state that it can do multiprocessing, but I'm not sure how to get that to work.
I'm just running tests by running nosetests and using the following .coveragerc:
[run]
branch = True
parallel = True

[report]
# Regexes for lines to exclude from consideration
exclude_lines =
    # Have to re-enable the standard pragma
    pragma: no cover
    # Don't complain about missing debug-only code:
    def __repr__
    if self\.debug
    # Don't complain if tests don't hit defensive assertion code:
    raise AssertionError
    raise NotImplementedError
    # Don't complain if non-runnable code isn't run:
    if 0:
    if __name__ == .__main__.:
    def __main__\(\):

omit =
    mainserver/tests/*
EDIT:
I fixed the parallel switch in my ".coveragerc" file. I've also tried adding a sitecustomize.py like so in my site-packages directory:
import os
import coverage
os.environ['COVERAGE_PROCESS_START']='/sites/metrics_dev/.coveragerc'
coverage.process_startup()
I'm pretty sure it's still not working properly, though, because the "missing" report still shows lines that I know are running (they output to the console). I've also tried adding the environment variable in my test case file and also in the shell before running the test cases. I also tried explicitly calling the same things in the function that's called by multiprocessing.Process to start the new process.
First, the configuration setting you need is parallel, not parallel-mode. Second, you probably need to follow the directions in the Measuring Subprocesses section of the coverage.py docs.
Another thing to check is whether you see more than one coverage data file after the run; it may just be a matter of combining them afterwards with coverage combine.
tl;dr — to use coverage + nosetests + nose’s --processes option, set coverage’s --concurrency option to multiprocessing, preferably in either .coveragerc or setup.cfg rather than on the command-line (see also: command line usage and configuration files).
Long version…
I also fought this for a while, having followed the documentation on Configuring Python for sub-process coverage to the letter. Finally, upon re-examining the output of coverage run --help a bit more closely, I stumbled across the --concurrency=multiprocessing option, which I’d never used before, and which seems to be the missing link. (In hindsight, this makes sense: nose’s --processes option uses the multiprocessing library under the hood.)
Here is a minimal configuration that works as expected:
unit.py:
def is_even(x):
    if x % 2 == 0:
        return True
    else:
        return False
test.py:
import time
from unittest import TestCase

import unit

class TestIsEvenTrue(TestCase):
    def test_is_even_true(self):
        time.sleep(1)  # verify multiprocessing is being used
        self.assertTrue(unit.is_even(2))

# use a separate class to encourage nose to use a separate process for this
class TestIsEvenFalse(TestCase):
    def test_is_even_false(self):
        time.sleep(1)
        self.assertFalse(unit.is_even(1))
setup.cfg:
[nosetests]
processes = 2
verbosity = 2
[coverage:run]
branch = True
concurrency = multiprocessing
parallel = True
source = unit
sitecustomize.py (note: located in site-packages)
import os

try:
    import coverage
    os.environ['COVERAGE_PROCESS_START'] = 'setup.cfg'
    coverage.process_startup()
except ImportError:
    pass
$ coverage run $(command -v nosetests)
test_is_even_false (test.TestIsEvenFalse) ... ok
test_is_even_true (test.TestIsEvenTrue) ... ok
----------------------------------------------------------------------
Ran 2 tests in 1.085s
OK
$ coverage combine && coverage report
Name      Stmts   Miss Branch BrPart  Cover
-------------------------------------------
unit.py       4      0      2      0   100%
I am trying to get into testing in Python using the doctest module. At the moment I do the following:
1. Write the tests for a function.
2. Implement the function's code.
3. If the tests pass, write more tests and more code.
4. When the function is done, move on to the next function to implement.
So after 3 or 4 (independent) functions in the same module, each with many tests, I get a huge amount of output from doctest, and it is a little annoying.
Is there a way to tell doctest "don't test functions a(), b() and c()", so that it runs only the unmarked functions?
I only found the doctest.SKIP flag, which is not sufficient for my needs: I would have to place the flag on a lot of lines, and if I wanted to check a marked function again, I would have to go through the code manually and remove every flag I had set.
Looks like you can pass the function to run_docstring_examples:
def f(a, b, c):
    '''
    >>> f(1,2,3)
    42
    '''

if __name__ == '__main__':
    import doctest
    # doctest.testmod()
    doctest.run_docstring_examples(f, globals())
Example found via Google.
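If several functions are still under active development, the same idea extends to looping over just those functions; the name keyword only labels the output (a sketch, with g as a hypothetical second function):

if __name__ == '__main__':
    import doctest
    for func in (f, g):  # g is a hypothetical second function under development
        doctest.run_docstring_examples(func, globals(), name=func.__name__)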
I put together a helper script to make this a little less painful. It can be installed using:
pip install doctestfn
It can then be used as follows:
usage: doctestfn [-h] [-v] module function

Run doctests for one function

positional arguments:
  module         Module to load
  function       Function to test

optional arguments:
  -h, --help     show this help message and exit
  -v, --verbose  Enable verbose doctest output