Can pytest run tests within a test class?

I have a bunch of tests which I decided to put within a class, sample code is below:
class IntegrationTests:
    @pytest.mark.integrationtest
    @pytest.mark.asyncio
    async def test_job(self):
        assert await do_stuff()
However, when I try to run the tests with pipenv run pytest -v -m integrationtest, they are not detected at all. Before moving them into a class I got:
5 passed, 4 deselected in 0.78 seconds
but I now get:
2 passed, 4 deselected in 0.51 seconds
Why does pytest not detect these tests? Are test classes not supported?

The name of the class needs to start with Test for pytest's test discovery to find it.
class TestIntegration:
    @pytest.mark.integrationtest
    @pytest.mark.asyncio
    async def test_job(self):
        assert await do_stuff()
See Conventions for Python test discovery

Create a pytest.ini
From the docs:
In case you need to change the naming convention for test files, classes and tests, you can create a file pytest.ini, and set the options python_files, python_classes, and python_functions:
Example:
# content of pytest.ini
# Example 1: have pytest look for "check" instead of "test"
# can also be defined in tox.ini or setup.cfg file, although the section
# name in setup.cfg files should be "tool:pytest"
[pytest]
python_files = check_*.py
python_classes = *Tests
python_functions = *_check
In your case, if you don't want to change the name of the class IntegrationTests, set python_classes to *Tests.
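Note that setting python_classes replaces the default Test* pattern rather than adding to it, so if you also have classes using the Test prefix you can list both glob patterns separated by a space. A minimal pytest.ini for that (a sketch, keeping the default file and function naming):
# content of pytest.ini
[pytest]
python_classes = Test* *Tests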
Running tests inside a class
pytest /path/to/test_file_name.py::ClassName
Running a test inside a class
pytest /path/to/test_file_name.py::ClassName::test_name
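Applied to this question, that would look something like the following (assuming the tests live in a file named test_integration.py, a hypothetical name here):
pytest test_integration.py::TestIntegration::test_job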

To run all the tests under the class TestIntegration, you can use:
pytest -k TestIntegration

Put the decorator above the class; a class with tests inside already acts as a group.
@pytest.mark.smoke1
class TestClass:
    def test_one(self):
        assert True
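Every test method in the class then inherits the mark, so you can run the whole group with (assuming the mark is registered in your config):
pytest -v -m smoke1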


Python - Using pytest to skip test unless specified

Background
I am using pytest to test a web scraper that pushes data to a database. The class only pulls the HTML and pushes it to a database to be parsed later. Most of my tests use dummy data to represent the HTML.
Question
I want to do a test where a webpage from the website is scraped, but I want that test to be automatically turned off unless specified. A similar scenario would be an expensive or time-consuming test that you do not want to run every time.
Expected Solution
I am expecting some kind of marker that suppresses a test unless I tell pytest to run all suppressed tests, but I do not see that in the documentation.
What I have done
I am currently using the skip marker and commenting it out.
I tried to use the skipif marker and pass an argument to the Python script with the command pytest test_file.py 1, together with the code below in the test file. The problem is that when I try to provide an argument to my test file, pytest expects it to be another file name, so I get the error "no tests run in 0.00 seconds, ERROR: file not found: 1".
if len(sys.argv) == 1:
    RUN_ALL_TESTS = False
else:
    RUN_ALL_TESTS = True
...
# other tests
...
@pytest.mark.skipif(RUN_ALL_TESTS)
def test_scrape_website():
    ...
I might be able to treat the test as a fixture and use @pytest.fixture(autouse=False); I'm not sure how to override the autouse variable, though.
A similar solution was stated in How to skip a pytest using an external fixture?, but that solution seems more complicated than what I need.
The docs describe exactly your problem: https://docs.pytest.org/en/latest/example/simple.html#control-skipping-of-tests-according-to-command-line-option. Copying from there:
Here is a conftest.py file adding a --runslow command line option to
control skipping of pytest.mark.slow marked tests:
# content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now write a test module like this:
# content of test_module.py
import pytest

def test_func_fast():
    pass

@pytest.mark.slow
def test_func_slow():
    pass
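With that in place, running pytest -v skips test_func_slow (reported with the reason "need --runslow option to run"), while pytest -v --runslow runs both tests.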
There are a couple of ways to handle this, but I'll go over two common approaches I've seen in Python baselines.
1) Separate your tests by putting the "optional" tests in another directory.
Not sure what your project layout looks like, but you can do something like this (only the test directory is important, the rest is just a toy example layout):
README.md
setup.py
requirements.txt
test/
    unit/
        test_something.py
        test_something_else.py
    integration/
        test_optional.py
application/
    __init__.py
    some_module.py
Then, when you invoke pytest, you invoke it by doing pytest test/unit if you want to run just the unit tests (i.e. only test_something*.py files), or pytest test/integration if you want to run just the integration tests (i.e. only test_optional.py), or pytest test if you want to run all the tests. So, by default, you can just run pytest test/unit.
I recommend wrapping these calls in some sort of script. I prefer make since it is powerful for this type of wrapping. Then you can say make test and it just runs your default (fast) test suite, or make test_all, and it'll run all the tests (which may or may not be slow).
Example Makefile you could wrap with:
.PHONY: all clean install test test_int test_all uninstall

all: install

clean:
	rm -rf build
	rm -rf dist
	rm -rf *.egg-info

install:
	python setup.py install

test: install
	pytest -v -s test/unit

test_int: install
	pytest -v -s test/integration

test_all: install
	pytest -v -s test

uninstall:
	pip uninstall app_name
2) Mark your tests judiciously with the @pytest.mark.skipif decorator, but use an environment variable as the trigger
I don't like this solution as much; it feels a bit haphazard to me (it's hard to tell which set of tests is being run on any given pytest run). However, what you can do is define an environment variable and then rope that environment variable into the module to detect if you want to run all your tests. Environment variables are shell dependent, but I'll pretend you have a bash environment since that's a popular shell.
You could do export TEST_LEVEL="unit" for just fast unit tests (so this would be your default), or export TEST_LEVEL="all" for all your tests. Then in your test files, you can do what you were originally trying to do like this:
import os
...
# a reason string is required when skipif is given a boolean condition,
# and .get() avoids a KeyError when TEST_LEVEL is unset
@pytest.mark.skipif(os.environ.get("TEST_LEVEL", "unit") == "unit",
                    reason="only runs when TEST_LEVEL is set to 'all'")
def test_scrape_website():
    ...
Note: Naming the test levels "unit" and "integration" is irrelevant. You can name them whatever you want. You can also have many levels (like nightly tests or performance tests).
Also, I think option 1 is the best way to go, since it not only clearly allows separation of testing, but it can also add semantics and clarity to what the tests mean and represent. But there is no "one size fits all" in software; you'll have to decide what approach you like based on your particular circumstances.
HTH!
A very simple solution is to use the -k argument. The -k parameter tries to match its argument against any part of a test's name or its markers, and you can invert the match with not (the boolean operators and and or also work). Thus -k 'not slow' deselects tests that have "slow" in their name, carry a marker containing "slow", or whose class/module name contains "slow".
For example, given this file:
import pytest

def test_true():
    assert True

@pytest.mark.slow
def test_long():
    assert False

def test_slow():
    assert False
When you run:
pytest -k 'not slow'
It outputs something like this (note that both failing tests were deselected because they matched the filter):
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
rootdir: c:\Users\User\Documents\python, inifile:
collected 3 items
test_thing.py . [100%]
============================= 2 tests deselected ==============================
=================== 1 passed, 2 deselected in 0.02 seconds ====================
Because of this eager matching you might want to do something like putting all your unit tests in a directory called unittest and then marking the slow ones as slow_unittest (so as not to accidentally match a test that just happens to have slow in its name). You could then use -k 'unittest and not slow_unittest' to match all your quick unit tests.
See the pytest docs for more example marker usage.
Here is a little class that makes @xverges's code reusable across multiple marks/CLI options:
from dataclasses import dataclass

import pytest

@dataclass
class TestsWithMarkSkipper:
    ''' Util to skip tests with mark, unless cli option provided. '''
    test_mark: str
    cli_option_name: str
    cli_option_help: str

    def pytest_addoption_hook(self, parser):
        parser.addoption(
            self.cli_option_name,
            action="store_true",
            default=False,
            help=self.cli_option_help,
        )

    def pytest_collection_modifyitems_hook(self, config, items):
        if not config.getoption(self.cli_option_name):
            self._skip_items_with_mark(items)

    def _skip_items_with_mark(self, items):
        reason = "need {} option to run".format(self.cli_option_name)
        skip_marker = pytest.mark.skip(reason=reason)
        for item in items:
            if self.test_mark in item.keywords:
                item.add_marker(skip_marker)
Usage example (must be put in conftest.py):
slow_skipper = TestsWithMarkSkipper(
    test_mark='slow',
    cli_option_name="--runslow",
    cli_option_help="run slow tests",
)

pytest_addoption = slow_skipper.pytest_addoption_hook
pytest_collection_modifyitems = slow_skipper.pytest_collection_modifyitems_hook
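With that conftest.py in place, behavior matches the docs example above: plain pytest skips tests marked slow, and pytest --runslow runs them.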

Group test execution by class in pytest

I have the testcases structured in the following way.
app
test_login.py
class1
test_11
test_12
test_function1.py
class2
test_21
test_22
When I run "pytest.exe app", pytest is able to identify all the test cases, but it executes them in random order, for example test_11, test_22, test_12, and so on.
Is there any way I can change this and execute all testcases in a file::class first and then move on to another file::class?
it executes in random order
By default, tests are sorted by module; inside a module, tests are executed in the order they are defined. So you should get roughly this order:
$ pytest --collect-only -q
test_function1.py::class2::test_21
test_function1.py::class2::test_22
test_login.py::class1::test_11
test_login.py::class1::test_12
Is there any way I can change this and execute all testcases in a file::class first and then move on to another file::class?
If you want to change the default execution order, you can do it in the pytest_collection_modifyitems hook. Example that reorders the collected tests by class name, then by test name:
# conftest.py
import operator

def pytest_collection_modifyitems(items):
    # sort by class name, then by test name
    # (assumes every collected test lives in a class; item.cls is None
    # for module-level test functions, so attrgetter would raise there)
    items.sort(key=operator.attrgetter('cls.__name__', 'name'))
Now tests in test_login will be executed before those in test_function1 because the module names are not counted in the ordering anymore:
$ pytest --collect-only -q
test_login.py::class1::test_11
test_login.py::class1::test_12
test_function1.py::class2::test_21
test_function1.py::class2::test_22
The following code block solved my issue. Thanks @hoefling for your time and help.
# conftest.py
import operator

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_collection_modifyitems(items):
    # let other implementations of the hook run first, then sort
    yield
    items.sort(key=operator.attrgetter('cls.__name__'))

Controlling which tests run using pytest

I'm considering converting some unittest.TestCase tests into Pytest ones to take advantage of Pytest's fixtures. One feature of unittest that I wasn't able to easily find the equivalent of in Pytest, however, is the ability to create testing suites and run them. I currently often do something like this:
import unittest

class TestSomething(unittest.TestCase):
    def test_1(self):
        self.assertEqual("hello".upper(), "HELLO")

    def test_2(self):
        self.assertEqual(1+1, 2)

if __name__ == "__main__":
    suite = unittest.TestSuite()
    # suite.addTest(TestSomething('test_1'))
    suite.addTest(TestSomething('test_2'))
    runner = unittest.TextTestRunner()
    runner.run(suite)
By commenting in and out the lines with addTest, I can easily select which tests to run. How would I do something similar with Pytest?
You can use the -k argument to run specific tests. For example
# put this in test.py
import unittest

class TestSomething(unittest.TestCase):
    def test_1(self):
        self.assertEqual("hello".upper(), "HELLO")

    def test_2(self):
        self.assertEqual(1+1, 2)
Running all tests in the class TestSomething can be done like this:
py.test test.py -k TestSomething
Running only test_2:
py.test test.py -k "TestSomething and test_2"
More examples in the documentation
Another way to go is to use special test names. These can be configured in the pytest.ini file.
# content of pytest.ini
# can also be defined in tox.ini or setup.cfg file, although the section
# name in setup.cfg files should be "tool:pytest"
[pytest]
python_files=check_*.py
python_classes=Check
python_functions=*_check
Another way is to take action in conftest.py. In this example the collect_ignore config variable is used. It is a list of test paths to be ignored. Here, test_something.py is always ignored during collection, and test_other_module_py2.py is ignored when testing with Python 3.
# content of conftest.py
import sys

collect_ignore = ["test_something/test_something.py"]
if sys.version_info[0] > 2:
    collect_ignore.append("test_other/test_other_module_py2.py")
Since pytest 2.6 it is also possible to omit classes from test collection like this:
# Will not be discovered as a test
class TestClass:
    __test__ = False
These examples were loosely taken from the pytest documentation chapter Changing standard (Python) test discovery.
In addition to using -k filters, you can name the specific test classes or cases you want to run:
py.test test.py::TestSomething::test_2
would run just test_2.
I think the best way to do this is to use custom pytest markers. You should mark the specific tests you want to run with
@pytest.mark.mymarkername
and run only the tests with the custom marker using the command:
py.test -v -m mymarkername
Here you can find more info regarding markers:
http://doc.pytest.org/en/latest/example/markers.html
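On recent pytest versions, unregistered marks produce warnings (and errors under --strict-markers), so it is worth declaring custom marks in your config, e.g.:
# content of pytest.ini
[pytest]
markers =
    mymarkername: marks tests selected via -m mymarkername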
Building on mbatchkarov's answer, since the names of my tests can get quite lengthy, I would still like to be able to select tests by commenting lines in and out and hitting Ctrl+B in Sublime (or Ctrl+R using the Atom Runner). One way to do this is as follows:
import unittest
import pytest

class TestSomething(unittest.TestCase):
    def test_1(self):
        self.assertEqual("hello".upper(), "HELLO")

    def test_2(self):
        self.assertEqual(1+1, 2)

if __name__ == "__main__":
    tests_to_run = []
    # tests_to_run.append('TestSomething and test_1')
    tests_to_run.append('TestSomething and test_2')
    tests_to_run = " or ".join(tests_to_run)
    args = [__file__, '-k', tests_to_run]
    pytest.main(args)
The idea behind this is that because pytest accepts a string expression to match tests (rather than just a list of tests), one must generate a list of expressions each matching one test only and concatenate them with or.
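pytest.main() also returns an exit code, so if you want the script's exit status to reflect the test outcome you can end with sys.exit(pytest.main(args)) (after import sys).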

pytest: How to pass in values and create a test fixture from command line?

I have a simple test as shown below:
# contents of test_example
def test_addition(numbers):
    assert numbers < 5
And below is my conftest
# contents of conftest
import pytest

@pytest.fixture(params=[1, 2, 3, 4])
def numbers(request):
    return request.param
However, now I want to test the numbers 5 and 6 without having to hardcode them explicitly. On the command line, I would like to override the numbers test fixture with the numbers 5 and 6, like so:
py.test test_example.py --numbers=[5, 6]
I would expect the above invocation to override the conftest numbers fixture with the fixture created at the command line and run test_addition() on 5 and 6 only.
How would I go about doing this?
Reading here, you can do the following:
tests/conftest.py
def pytest_addoption(parser):
    parser.addoption("--numbers", action="store", dest="numbers",
                     default="1,2,3,4")

def pytest_generate_tests(metafunc):
    if 'number' in metafunc.fixturenames:
        metafunc.parametrize("number",
                             metafunc.config.option.numbers.split(','))
tests/test_1.py
def test_numbers(number):
    assert number
so:
$ py.test tests/ -vv
=========================================
collected 4 items
test_1.py::test_numbers[1] PASSED
test_1.py::test_numbers[2] PASSED
test_1.py::test_numbers[3] PASSED
test_1.py::test_numbers[4] PASSED
and
$ py.test tests/ -vv --numbers=10,11
=========================================
collected 2 items
test_1.py::test_numbers[10] PASSED
test_1.py::test_numbers[11] PASSED
Anyway, please note this warning from the docs:
Warning:
This function must be implemented in a plugin and is called once at the beginning of a test run.
Implementing this hook from conftest.py files is strongly discouraged because conftest.py files are lazily loaded and may give strange unknown option errors depending on the directory py.test is invoked from.
so this code works if you run
py.test tests/
but not if
cd tests
py.test
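One caveat: the parametrized values arrive as strings after split(','). If the test needs real integers, convert them during parametrization, e.g. (a sketch of the same hook):
def pytest_generate_tests(metafunc):
    if 'number' in metafunc.fixturenames:
        numbers = metafunc.config.option.numbers.split(',')
        # cast to int so tests compare numbers rather than strings
        metafunc.parametrize("number", [int(n) for n in numbers])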

Is there a way to add metadata in py files for grouping tests?

Let's say I have the following test cases in different files:
TestOne.py {tags: One, Two}
TestTwo.py {tags: Two}
TestThree.py {tags: Three}
Each of these inherits from unittest.TestCase. Is there any ability in Python to embed metadata information within these files, so that I can have a main.py script search for those tags and execute only the matching test cases?
For example: if I want to execute test cases with {tags: Two}, then only TestOne.py and TestTwo.py should be executed.
The py.test testing framework has support for metadata via what it calls markers.
For py.test, test cases are functions whose names start with "test", in modules whose names start with "test". The tests themselves are simple assert statements. py.test can also run tests for the unittest library and, IIRC, Nose tests.
The metadata consists of dynamically generated decorators for the test functions. The decorators have the form @pytest.mark.my_meta_name. You can choose anything for my_meta_name. There are a few predefined markers that you can see with py.test --markers.
Here is an adapted snippet from their documentation:
# content of test_server.py
import pytest

@pytest.mark.webtest
def test_send_http():
    pass  # perform some webtest test for your app

def test_always_succeeds():
    assert 2 == 3 - 1

def test_will_always_fail():
    assert 4 == 5
You select marked tests with the -m command line option of the test runner. To selectively run test_send_http() you enter this into a shell:
py.test -v -m webtest
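Marker expressions can also be combined or negated, so py.test -v -m "not webtest" runs everything except the marked test.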
Of course it's easier to define the tags in the main module, but if it's important for you to keep them with the test files, it could be a good solution to define them in the test files like this:
In TestOne.py:
test_tags = ['One', 'Two']
...
Then you can read all the tags in the initialize function of your main module in this way:
import importlib

test_modules = ['TestOne', 'TestTwo', 'TestThree']
test_tags_dict = {}

def initialize():
    for module_name in test_modules:
        # importlib.import_module is the stdlib way to import by name
        module = importlib.import_module(module_name)
        if hasattr(module, 'test_tags'):
            for tag in module.test_tags:
                if tag not in test_tags_dict:
                    test_tags_dict[tag] = []
                test_tags_dict[tag].append(module)
So you can implement a run_test_with_tag function to run all tests for a specific tag:
def run_test_with_tag(tag):
    for module in test_tags_dict.get(tag, []):
        # Run module tests here ...
        pass
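Since the test classes inherit from unittest.TestCase, one way to fill in that stub (a sketch, assuming the modules collected by initialize() above) is to hand each tagged module to a unittest loader:
import unittest

def run_test_with_tag(tag):
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for module in test_tags_dict.get(tag, []):
        # collect every TestCase defined in the tagged module
        suite.addTests(loader.loadTestsFromModule(module))
    unittest.TextTestRunner(verbosity=2).run(suite)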
