I'm considering converting some unittest.TestCase tests into Pytest ones to take advantage of Pytest's fixtures. One feature of unittest that I wasn't able to easily find the equivalent of in Pytest, however, is the ability to create testing suites and run them. I currently often do something like this:
import unittest

class TestSomething(unittest.TestCase):
    def test_1(self):
        self.assertEqual("hello".upper(), "HELLO")

    def test_2(self):
        self.assertEqual(1+1, 2)

if __name__ == "__main__":
    suite = unittest.TestSuite()
    # suite.addTest(TestSomething('test_1'))
    suite.addTest(TestSomething('test_2'))
    runner = unittest.TextTestRunner()
    runner.run(suite)
By commenting in and out the lines with addTest, I can easily select which tests to run. How would I do something similar with Pytest?
You can use the -k argument to run specific tests. For example
# put this in test.py
import unittest

class TestSomething(unittest.TestCase):
    def test_1(self):
        self.assertEqual("hello".upper(), "HELLO")

    def test_2(self):
        self.assertEqual(1+1, 2)
Running all tests in the class TestSomething can be done like this:
py.test test.py -k TestSomething
Running only test_2:
py.test test.py -k "TestSomething and test_2"
More examples in the documentation
Another way to go is to use special test names. These can be configured in the pytest.ini file.
# content of pytest.ini
# can also be defined in tox.ini or setup.cfg file, although the section
# name in setup.cfg files should be "tool:pytest"
[pytest]
python_files=check_*.py
python_classes=Check
python_functions=*_check
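With that configuration, pytest would collect a file like the following (the names check_demo.py, CheckDemo, and addition_check are made up for illustration):
# content of check_demo.py
class CheckDemo:
    def addition_check(self):
        assert "hello".upper() == "HELLO"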
Another way is to take action in conftest.py. In this example the collect_ignore config variable is used; it is a list of test paths to be ignored during collection. Here test_something.py is always ignored, and test_other_module_py2.py is ignored when testing with Python 3.
# content of conftest.py
import sys

collect_ignore = ["test_something/test_something.py"]
if sys.version_info[0] > 2:
    collect_ignore.append("test_other/test_other_module_py2.py")
Since pytest 2.6 it is also possible to omit classes from test collection like this:
# Will not be discovered as a test
class TestClass:
    __test__ = False
These examples were loosely taken from the pytest documentation chapter Changing standard (Python) test discovery.
In addition to using -k filters, you can name the specific test classes or cases you want to run:
py.test test.py::TestSomething::test_2
This would run just test_2.
I think the best way to do this is to use custom pytest markers.
You should mark the specific tests you want to run with
@pytest.mark.mymarkername
and run only the tests with that custom marker using the command:
py.test -v -m mymarkername
Here you can find more info regarding markers:
http://doc.pytest.org/en/latest/example/markers.html
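A minimal sketch of the whole flow (the marker name and test names here are just examples):
# content of pytest.ini -- registering the marker avoids "unknown mark" warnings
[pytest]
markers =
    mymarkername: tests selected for an ad-hoc run

# content of test_example.py
import pytest

@pytest.mark.mymarkername
def test_selected():
    assert True

def test_not_selected():
    assert True

py.test -v -m mymarkername then runs only test_selected.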
Building on mbatchkarov's answer, since the names of my tests can get quite lengthy, I would still like to be able to select tests by commenting lines in and out and hitting "Ctrl+B" in Sublime (or "Ctrl+R" using the Atom Runner). One way to do this is as follows:
import unittest
import pytest

class TestSomething(unittest.TestCase):
    def test_1(self):
        self.assertEqual("hello".upper(), "HELLO")

    def test_2(self):
        self.assertEqual(1+1, 2)

if __name__ == "__main__":
    tests_to_run = []
    # tests_to_run.append('TestSomething and test_1')
    tests_to_run.append('TestSomething and test_2')
    tests_to_run = " or ".join(tests_to_run)
    args = [__file__, '-k', tests_to_run]
    pytest.main(args)
The idea behind this is that, because Pytest accepts a string expression to match tests (rather than just a list of tests), one must generate a list of expressions that each match a single test, and concatenate them using or.
Related
I have a bunch of tests which I decided to put within a class, sample code is below:
class IntegrationTests:
    @pytest.mark.integrationtest
    @pytest.mark.asyncio
    async def test_job(self):
        assert await do_stuff()
However, when I try to run the tests with pipenv run pytest -v -m integrationtest, they are not detected at all. Before moving them into a class I got the following:
5 passed, 4 deselected in 0.78 seconds
I now get this:
2 passed, 4 deselected in 0.51 seconds
Why does pytest not detect these tests? Are test classes not supported?
The name of the class needs to start with Test for the pytest discovery to find it.
class TestIntegration:
    @pytest.mark.integrationtest
    @pytest.mark.asyncio
    async def test_job(self):
        assert await do_stuff()
See Conventions for Python test discovery
Create a pytest.ini
From the docs:
In case you need to change the naming convention for test files, classes and tests, you can create a file pytest.ini, and set the options python_files, python_classes, and python_functions:
Example:
# content of pytest.ini
# Example 1: have pytest look for "check" instead of "test"
# can also be defined in tox.ini or setup.cfg file, although the section
# name in setup.cfg files should be "tool:pytest"
[pytest]
python_files = check_*.py
python_classes = *Tests
python_functions = *_check
In your case, if you don't want to change the name of the class IntegrationTests, set python_classes to *Tests.
Running tests inside a class
pytest /path/to/test_file_name.py::ClassName
Running a test inside a class
pytest /path/to/test_file_name.py::ClassName::test_name
To run all the tests under the class "TestIntegration", you can use:
pytest -k TestIntegration
Put the decorator above the class; the class with the tests inside already acts like a group.
@pytest.mark.smoke1
class TestClass:
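A minimal runnable sketch of that idea (the marker name smoke1 and the test bodies are made up for illustration):
# content of test_smoke.py
import pytest

@pytest.mark.smoke1
class TestClass:
    def test_a(self):
        assert "hello".upper() == "HELLO"

    def test_b(self):
        assert 1 + 1 == 2

Running pytest -m smoke1 then selects the whole class as a group.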
Background
I am using pytest to test a web scraper that pushes data to a database. The class only pulls the html and pushes the html to a database to be parsed later. Most of my tests use dummy data to represent the html.
Question
I want one test where a webpage from the website is actually scraped, but I want that test to be turned off automatically unless specified. A similar scenario is an expensive or time-consuming test that you do not want to run every time.
Expected Solution
I am expecting some kind of marker that suppresses a test unless I tell pytest to run all suppressed tests, but I do not see that in the documentation.
What I have done
I am currently using the skip marker and commenting it out when needed.
I tried to use the skipif marker and to give arguments to the Python script using the command pytest test_file.py 1 from the command prompt, together with the code below in the test file. The problem is that when I try to provide an argument to my test file, pytest expects it to be another file name, so I get the error "no tests run in 0.00 seconds, ERROR: file not found: 1".
import sys

import pytest

if len(sys.argv) == 1:
    RUN_ALL_TESTS = False
else:
    RUN_ALL_TESTS = True

...
# other tests
...

@pytest.mark.skipif(RUN_ALL_TESTS)
def test_scrape_website():
    ...
I might be able to treat the test as a fixture and use @pytest.fixture(autouse=False); I am not sure how to override the autouse variable, though.
A similar solution was stated in How to skip a pytest using an external fixture? but this solutions seems more complicated than what I need.
The docs describe exactly your problem: https://docs.pytest.org/en/latest/example/simple.html#control-skipping-of-tests-according-to-command-line-option. Copying from there:
Here is a conftest.py file adding a --runslow command line option to
control skipping of pytest.mark.slow marked tests:
# content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now write a test module like this:
# content of test_module.py
import pytest

def test_func_fast():
    pass

@pytest.mark.slow
def test_func_slow():
    pass
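With those two files in place, the slow test is skipped unless the new option is passed:
pytest test_module.py            # test_func_slow is reported as skipped
pytest test_module.py --runslow  # both tests run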
There are a couple of ways to handle this, but I'll go over two common approaches I've seen in Python baselines.
1) Separate your tests by putting the "optional" tests in another directory.
Not sure what your project layout looks like, but you can do something like this (only the test directory is important, the rest is just a toy example layout):
README.md
setup.py
requirements.txt
test/
    unit/
        test_something.py
        test_something_else.py
    integration/
        test_optional.py
application/
    __init__.py
    some_module.py
Then, when you invoke pytest, you invoke it by doing pytest test/unit if you want to run just the unit tests (i.e. only test_something*.py files), or pytest test/integration if you want to run just the integration tests (i.e. only test_optional.py), or pytest test if you want to run all the tests. So, by default, you can just run pytest test/unit.
I recommend wrapping these calls in some sort of script. I prefer make since it is powerful for this type of wrapping. Then you can say make test and it just runs your default (fast) test suite, or make test_all, and it'll run all the tests (which may or may not be slow).
Example Makefile you could wrap with:
.PHONY: all clean install test test_int test_all uninstall
all: install
clean:
rm -rf build
rm -rf dist
rm -rf *.egg-info
install:
python setup.py install
test: install
pytest -v -s test/unit
test_int: install
pytest -v -s test/integration
test_all: install
pytest -v -s test
uninstall:
pip uninstall app_name
2) Mark your tests judiciously with the @pytest.mark.skipif decorator, but use an environment variable as the trigger
I don't like this solution as much; it feels a bit haphazard to me (it's hard to tell which set of tests is being run on any given pytest run). However, what you can do is define an environment variable and then rope that environment variable into the module to detect whether you want to run all your tests. Environment variables are shell dependent, but I'll pretend you have a bash environment since that's a popular shell.
You could do export TEST_LEVEL="unit" for just fast unit tests (so this would be your default), or export TEST_LEVEL="all" for all your tests. Then in your test files, you can do what you were originally trying to do like this:
import os
...
# .get avoids a KeyError when TEST_LEVEL is unset; skipif needs a reason with a boolean condition
@pytest.mark.skipif(os.environ.get("TEST_LEVEL", "unit") == "unit",
                    reason="needs TEST_LEVEL=all to run")
def test_scrape_website():
    ...
Note: Naming the test levels "unit" and "integration" is irrelevant; you can name them whatever you want. You can also have many levels (for example, nightly tests or performance tests).
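For example, the shell side might look like this (a sketch, assuming the bash environment mentioned above):
export TEST_LEVEL="unit"   # default: the skipif above suppresses the scraping test
pytest

export TEST_LEVEL="all"    # run everything, including test_scrape_website
pytest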
Also, I think option 1 is the best way to go, since it not only clearly separates the tests but can also add semantics and clarity to what the tests mean and represent. But there is no "one size fits all" in software; you'll have to decide which approach you like based on your particular circumstances.
HTH!
A very simple solution is to use the -k argument. You can use the -k parameter to deselect certain tests. -k tries to match its argument against any part of a test's name or its markers. You can invert the match by using not (you can also use the boolean operators and and or). Thus -k 'not slow' skips tests that have "slow" in the name, have a marker with "slow" in the name, or whose class/module name contains "slow".
For example, given this file:
import pytest

def test_true():
    assert True

@pytest.mark.slow
def test_long():
    assert False

def test_slow():
    assert False
When you run:
pytest -k 'not slow'
It outputs something like this (note that both failing tests were deselected because they matched the filter):
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
rootdir: c:\Users\User\Documents\python, inifile:
collected 3 items
test_thing.py . [100%]
============================= 2 tests deselected ==============================
=================== 1 passed, 2 deselected in 0.02 seconds ====================
Because of this eager matching you might want to do something like putting all your unit tests in a directory called unittest and then marking the slow ones as slow_unittest (so as not to accidentally match a test that just happens to have slow in its name). You could then use -k 'unittest and not slow_unittest' to match all your quick unit tests.
More pytest example marker usage
Form a little class to reuse @xverges's code for multiple marks/CLI options:
from dataclasses import dataclass

import pytest

@dataclass
class TestsWithMarkSkipper:
    ''' Util to skip tests with mark, unless cli option provided. '''
    test_mark: str
    cli_option_name: str
    cli_option_help: str

    def pytest_addoption_hook(self, parser):
        parser.addoption(
            self.cli_option_name,
            action="store_true",
            default=False,
            help=self.cli_option_help,
        )

    def pytest_collection_modifyitems_hook(self, config, items):
        if not config.getoption(self.cli_option_name):
            self._skip_items_with_mark(items)

    def _skip_items_with_mark(self, items):
        reason = "need {} option to run".format(self.cli_option_name)
        skip_marker = pytest.mark.skip(reason=reason)
        for item in items:
            if self.test_mark in item.keywords:
                item.add_marker(skip_marker)
Usage example (must be put in conftest.py):
slow_skipper = TestsWithMarkSkipper(
test_mark='slow',
cli_option_name="--runslow",
cli_option_help="run slow tests",
)
pytest_addoption = slow_skipper.pytest_addoption_hook
pytest_collection_modifyitems = slow_skipper.pytest_collection_modifyitems_hook
I have two modules with two different classes and their corresponding test classes.
foo.py
------
class foo(object):
    def fooMethod(self):
        pass  # smthg

bar.py
------
class bar(object):
    def barMethod(self):
        pass  # smthg

fooTest.py
------
class fooTest(unittest.TestCase):
    def fooMethodTest(self):
        pass  # smthg

barTest.py
------
class barTest(unittest.TestCase):
    def barMethodTest(self):
        pass  # smthg
In every test and source module file, I removed the if __name__ == "__main__": block to increase coherency and follow object-oriented ideology.
Like in Java unit testing, I'm looking to create a module that runs all unit tests. For example:
runAllTest.py
-------------
class runAllTest(unittest.TestCase):
    ?????

if __name__ == "__main__":
    ?????
I searched with a search engine but didn't find any tutorial or example. Is it possible to do this? Why, or how?
Note: I'm using eclipse and pydev distribution on windows machine.
When running unit tests based on the built-in python unittest module, at the root level of your project run
python -m unittest discover <module_name>
For the specific example above, it suffices to run
python -m unittest discover .
https://docs.python.org/2/library/unittest.html
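If the test files do not follow the default test*.py naming (as with fooTest.py and barTest.py above), discover's start-directory and pattern options may help; a sketch:
python -m unittest discover -s . -p "*Test.py" -v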
You could create a TestSuite and run all your tests in its if __name__ == '__main__' block:
import unittest

from fooTest import fooTest
from barTest import barTest

def create_suite():
    test_suite = unittest.TestSuite()
    # a TestCase must be instantiated with the name of the test method to run
    test_suite.addTest(fooTest('fooMethodTest'))
    test_suite.addTest(barTest('barMethodTest'))
    return test_suite

if __name__ == '__main__':
    suite = create_suite()
    runner = unittest.TextTestRunner()
    runner.run(suite)
If you do not want to create the test cases manually, look at this question/answer, which basically creates the test cases dynamically, or use some of the features of the unittest module such as the test discovery feature and its command-line options.
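For example, a loader-based sketch (using the fooTest/barTest classes above; note that the default loader only collects methods whose names start with "test", so methods like fooMethodTest would need renaming or an adjusted testMethodPrefix):
import unittest

from fooTest import fooTest
from barTest import barTest

if __name__ == '__main__':
    loader = unittest.defaultTestLoader
    suite = unittest.TestSuite()
    # pulls in every test-prefixed method of each TestCase subclass
    suite.addTests(loader.loadTestsFromTestCase(fooTest))
    suite.addTests(loader.loadTestsFromTestCase(barTest))
    unittest.TextTestRunner(verbosity=2).run(suite)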
I think what you are looking for is the TestLoader. With this you can load specific tests or modules or load everything under a given directory. Also, this post has some useful examples using a TestSuite instance.
EDIT: The code I usually have in my test.py:
if not popts.tests:
    suite = unittest.TestLoader().discover(os.path.dirname(__file__)+'/tests')
    #print(suite._tests)
    # Print outline
    lg.info(' * Going for Interactive net tests = '+str(not tvars.NOINTERACTIVE))
    # Run
    unittest.TextTestRunner(verbosity=popts.verbosity).run(suite)
else:
    lg.info(' * Running specific tests')
    suite = unittest.TestSuite()
    # Load standard tests
    for t in popts.tests:
        test = unittest.TestLoader().loadTestsFromName("tests."+t)
        suite.addTest(test)
    # Run
    unittest.TextTestRunner(verbosity=popts.verbosity).run(suite)
It does two things:
If the -t (tests) flag is not present, find and load all tests in the directory.
Otherwise, load the requested tests one by one.
I think you could just run the following command in the folder where your test files are located:
python -m unittest
as mentioned in the docs: "when executed without arguments Test Discovery is started".
With PyDev right click on a folder in Eclipse and choose "Run as-> Python unit-test". This will run all tests in that folder (the names of the test files and methods have to start with "test_".)
You are looking for nosetests.
You might need to rename your files; I'm not sure about the pattern nose uses to find the test files but, personally, I use *_test.py. It is possible to specify a custom pattern which your project uses for test filenames but I remember being unable to make it work so I ended up renaming my tests instead.
You also need to follow PEP 328 conventions to work with nose. I don't use IDEs with Python, but your IDE may already follow it; just read the PEP and check.
With a PEP 328 directory/package structure, you can run individual tests as
nosetests path.to.class_test
Note that instead of the usual directory separators (/ or \), I used dots.
To run all tests, simply invoke nosetests at the root of your project.
In nosetests, I know that you can specify which tests you want to run via a nosetests config file as such:
[nosetests]
tests=testIWT_AVW.py:testIWT_AVW.tst_bynd1,testIWT_AVW.py:testIWT_AVW.tst_bynd3
However, the above just looks messy and becomes harder to maintain as more tests are added, especially without being able to use line breaks. I found it a lot more convenient to specify which tests I want to run using unittest's TestSuite feature, e.g.:
def custom_suite():
    suite = unittest.TestSuite()
    suite.addTest(testIWT_AVW('tst_bynd1'))
    suite.addTest(testIWT_AVW('tst_bynd3'))
    return suite

if __name__ == "__main__":
    runner = unittest.TextTestRunner()
    runner.run(custom_suite())
Question: How do I specify which tests should be run by nosetests within my .py file? Thanks.
P.S. If there is a way to specify tests via a nosetest config file that doesn't force all tests to be written on one line I would be open to it as well, as a second alternative
I'm not entirely sure whether you want to run the tests programmatically or from the command line. Either way this should cover both:
import itertools
from nose.loader import TestLoader
from nose import run
from nose.suite import LazySuite

paths = ("/path/to/my/project/module_a",
         "/path/to/my/project/module_b",
         "/path/to/my/project/module_c")

def run_my_tests():
    all_tests = ()
    for path in paths:
        all_tests = itertools.chain(all_tests, TestLoader().loadTestsFromDir(path))
    suite = LazySuite(all_tests)
    run(suite=suite)

if __name__ == '__main__':
    run_my_tests()
Note that the nose.loader.TestLoader object has a number of different methods available for loading tests.
You can call the run_my_tests function from other code, or you can run this from the command line with a Python interpreter rather than through nose. If you have other nose configuration, you may need to pass that in programmatically as well.
If I'm correctly understanding your question, you have several options here:
you can mark your tests with special nose decorators: istest and nottest. See docs
you can mark tests with tags
you can join test cases in test suites. I haven't used it myself, but it seems that you have to override nose's default test discovery to respect your test suites (see docs)
Hope that helps.
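For the decorator option, a minimal sketch (the function names and bodies here are made up):
from nose.tools import istest, nottest

@istest
def addition_works():
    # collected even though the name lacks the "test" prefix
    assert 1 + 1 == 2

@nottest
def test_expensive_scrape():
    # excluded from collection despite the "test" prefix
    pass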
Let's say I have the following test cases in different files:
TestOne.py {tags: One, Two}
TestTwo.py {tags: Two}
TestThree.py {tags: Three}
Each of these inherits from unittest.TestCase. Is there any ability in Python to embed metadata within these files, so that I can have a main.py script search for those tags and execute only the matching test cases?
For example: if I want to execute test cases with {tags: Two}, then only TestOne.py and TestTwo.py should be executed.
The py.test testing framework has support for meta data, via what they call markers.
For py.test, test cases are functions whose names start with "test" and which live in modules whose names start with "test". The tests themselves are simple assert statements. py.test can also run tests written for the unittest library and, IIRC, Nose tests.
The metadata consists of dynamically generated decorators for the test functions. The decorators have the form @pytest.mark.my_meta_name. You can choose anything for my_meta_name. There are a few predefined markers that you can see with py.test --markers.
Here is an adapted snippet from their documentation:
# content of test_server.py
import pytest

@pytest.mark.webtest
def test_send_http():
    pass  # perform some webtest test for your app

def test_always_succeeds():
    assert 2 == 3 - 1

def test_will_always_fail():
    assert 4 == 5
You select marked tests with the -m command line option of the test runner. To selectively run test_send_http() you enter this into a shell:
py.test -v -m webtest
Of course it's easier to define tags in the main module, but if it's important for you to keep them with the test files, it could be a good solution to define them in the test files like this:
In TestOne.py:
test_tags = ['One', 'Two']
...
Then you can read all tags in the initialize function of your main module in this way:
test_modules = ['TestOne', 'TestTwo', 'TestThree']
test_tags_dict = {}

def initialize():
    for module_name in test_modules:
        # import_string: any import-by-name helper, e.g. importlib.import_module
        module = import_string(module_name)
        if hasattr(module, 'test_tags'):
            for tag in module.test_tags:
                if tag not in test_tags_dict:
                    test_tags_dict[tag] = []
                test_tags_dict[tag].append(module)
So you can implement a run_test_with_tag function to run all tests for a specific tag:
def run_test_with_tag(tag):
    for module in test_tags_dict.get(tag, []):
        pass  # Run module tests here ...
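One way to fill in that body is a sketch like this, using unittest's own loader on each tagged module (it assumes the test_tags_dict built by initialize above):
import unittest

def run_test_with_tag(tag):
    suite = unittest.TestSuite()
    for module in test_tags_dict.get(tag, []):
        # collect every TestCase defined in the tagged module
        suite.addTests(unittest.defaultTestLoader.loadTestsFromModule(module))
    unittest.TextTestRunner(verbosity=2).run(suite)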