I'm using the pytest-flake8 plugin to lint my Python code.
Every time I run the linting like this:
pytest --flake8
In addition to the linting, all tests are run.
But I would like to run the linter checks only.
How can I configure pytest so that it only lints the code but skips all my tests, preferably via the command line (or conftest.py), without having to add skip markers to my tests?
The flake8 checks are marked with the flake8 marker, so you can select only those by running:
pytest --flake8 -m flake8
Pytest's --ignore <path> option also works well here if all of your tests are in one directory.
I usually hide this behind a make command. In this case my Makefile and tests directory are both at the root of the repository.
.PHONY: lint
lint:
	pytest --flake8 --ignore tests
I had the same problem, and after some digging I realized that I just wanted to run flake8 directly:
flake8 <path to folder>
That's it. There is no need to run anything else, as your flake8 configuration is independent of pytest.
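If you do want project-specific settings, flake8 reads them from its own configuration file rather than from pytest's; a minimal sketch (the option values here are just placeholders) could look like this:
# setup.cfg (flake8 also reads a standalone .flake8 file or a [flake8] section in tox.ini)
[flake8]
max-line-length = 100
exclude = .tox,build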
You can alter the collection logic yourself, for example by discarding collected tests whenever the --flake8 argument is passed:
# conftest.py
def pytest_collection_modifyitems(session, config, items):
    if config.getoption('--flake8'):
        items[:] = [item for item in items if item.get_closest_marker('flake8')]
Now only the flake8 checks will be executed; the rest will simply be ignored.
After some more thought, this is the solution I came up with, and it works with pytest 5.3.5 (get_marker from https://stackoverflow.com/a/52891274/319905 no longer exists).
It allows me to run specific linting checks via commandline.
Since I still like to keep the option of running both linting checks and tests, I added a flag telling pytest if it should only do linting.
Usage:
# Run only flake8 and mypy, no tests
pytest --lint-only --flake8 --mypy
# Run tests and flake8
pytest --flake8
Code:
# conftest.py
def pytest_addoption(parser):
    parser.addoption(
        "--lint-only",
        action="store_true",
        default=False,
        help="Only run linting checks",
    )


def pytest_collection_modifyitems(session, config, items):
    if config.getoption("--lint-only"):
        lint_items = []
        for linter in ["flake8", "black", "mypy"]:
            if config.getoption(f"--{linter}"):
                lint_items.extend(
                    [item for item in items if item.get_closest_marker(linter)]
                )
        items[:] = lint_items
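Note that the flake8, black and mypy markers filtered on above are, as far as I know, registered by the corresponding plugins (pytest-flake8, pytest-black and pytest-mypy), so no extra marker registration should be needed; if you use another linter plugin, extend the list accordingly.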
Related
I use pytest for testing my Python project. What I want to do is add a check to my test suite that verifies whether my code is formatted with Black. When I run the command "pytest --black", my whole project is tested the way I want. How can I add this check to my test suite so that the same thing happens when I run the "python setup.py pytest" command?
pytest.ini has an addopts key which does literally what it says on the tin: it adds options to the pytest command line you used.
So at the root of your project, you should be able to just create a pytest.ini file containing:
[pytest]
addopts = --black
and it should automatically enable black-as-test on your pytest runs.
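If you would rather keep everything in setup.cfg (which pytest also reads, including when it is launched through "python setup.py pytest" with pytest-runner), the equivalent section should presumably be:
[tool:pytest]
addopts = --black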
Context
I am updating an inherited repository which has poor test coverage. The repo itself is a pytest plugin. I've changed the repo to use tox along with pytest-cov, and converted the "raw" tests to use pytester as suggested in the pytest documentation when testing plugins.
The testing and tox build, etc. works great. However, the coverage is reporting false misses with things like class definitions, imports, etc. This is because the code itself is being imported as part of pytest instantiation, and isn't getting "covered" until the testing actually starts.
I've read pytest docs, pytest-cov and coverage docs, and tox docs, and tried several configurations, but to no avail. I've exhausted my pool of google keyword combinations that might lead me to a good solution.
Repository layout
pkg_root/
    .tox/
        py3/
            lib/
                python3.7/
                    site-packages/
                        plugin_module/
                            supporting_module.py
                            plugin.py
                            some_data.dat
    plugin_module/
        supporting_module.py
        plugin.py
        some_data.dat
    tests/
        conftest.py
        test_my_plugin.py
    tox.ini
    setup.py
Some relevant snippets with commentary:
tox.ini
[pytest]
addopts = --cov={envsitepackagesdir}/plugin_module --cov-report=html
testpaths = tests
This configuration gives me an error that no data was collected; no htmlcov is created in this case.
If I just use --cov, I get (as expected) very noisy coverage, which shows the functional hits and misses, but with the false misses described above for imports, class definitions, etc.
conftest.py
pytest_plugins = ['pytester'] # Entire contents of file!
test_my_plugin.py
def test_a_thing(testdir):
    testdir.makepyfile(
        """
        def test_that_fixture(my_fixture):
            assert my_fixture.foo == 'bar'
        """
    )
    result = testdir.runpytest()
    result.assert_outcomes(passed=1)
How can I get an accurate report? Is there a way to defer the plugin loading until it's demanded by the pytester tests?
Instead of using the pytest-cov plugin, use coverage to run pytest:
coverage run -m pytest ....
That way, coverage will be started before pytest.
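In a tox setup like the one above, that might look roughly like this (the section contents are only a sketch; adjust the deps, source name and paths to your project):
# tox.ini
[testenv]
deps =
    pytest
    coverage
commands =
    coverage run --source=plugin_module -m pytest tests
    coverage report -m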
You can achieve what you want without pytest-cov.
❯ coverage run --source=<package> --module pytest --verbose <test-files-dirs> && coverage report --show-missing
Or, shorter:
❯ coverage run --source=<package> -m pytest -v <test-files-dirs> && coverage report -m
Example: (for your directory structure)
❯ coverage run --source=plugin_module -m pytest -v tests && coverage report -m
======================= test session starts ========================
platform darwin -- Python 3.9.4, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /Users/johndoe/.local/share/virtualenvs/plugin_module--WYTJL20/bin/python
cachedir: .pytest_cache
rootdir: /Users/johndoe/projects/plugin_module, configfile: pytest.ini
collected 1 items
tests/test_my_plugin.py::test_my_plugin PASSED [100%]
======================== 1 passed in 0.04s =========================
Name Stmts Miss Cover Missing
-------------------------------------------------------------
plugin_module/supporting_module.py 4 0 100%
plugin_module/plugin.py 6 0 100%
-------------------------------------------------------------
TOTAL 21 0 100%
For an even nicer output, you can use:
❯ coverage html && open htmlcov/index.html
Documentation
❯ coverage -h
❯ pytest -h
coverage
run -- Run a Python program and measure code execution.
-m, --module --- Run an importable Python module, not a script path, as 'python -m' would run it.
--source=SRC1,SRC2, --- A list of packages or directories of code to be measured.
report -- Report coverage stats on modules.
-m, --show-missing --- Show line numbers of statements in each module that weren't executed.
html -- Create an HTML report.
pytest
-v, --verbose -- increase verbosity.
I've got a test suite set up and have been using pytest and pytest-django.
To give some background: I am doing integration testing with a headless browser, and there are certain test cases that are only meant for a standard (non-headless) browser; I'd like to deselect those real, visual browser tests so that they don't run in my CI/CD pipeline.
Given the example pytest.ini below, if I run pytest launcher it shows as I would expect that there is 1 test being deselected (the StandardBrowserTestCases class only has 1 test in it).
However, if I run pytest other_app (which also has a StandardBrowserTestCases class), it does not show anything being deselected, and StandardBrowserTestCases is run along with the other tests instead of being deselected.
[pytest]
addopts =
    --nomigrations
    --cov-config=.coveragerc
    --cov=my_project
    --cov=other_app
    --cov=launcher
    --cov=people
    ; # Ignore the StandardBrowserTestCases - these are only used for local
    ; # development / visual debug and contain no real valuable tests
    --deselect=other_app/tests/test_integrations.py::StandardBrowserTestCases
    --deselect=launcher/tests/test_integrations.py::StandardBrowserTestCases
    --deselect=people/tests/test_integrations.py::StandardBrowserTestCases
    --junitxml=./test-results/junit.xml
    --cov-report html:./test-results/htmlcov
    --html=./test-results/test_results.html
    --self-contained-html
DJANGO_SETTINGS_MODULE = my_project.unit-test-settings
python_files = tests.py test_*.py *_tests.py
Am I using --deselect right? How can I troubleshoot why it works for one app (launcher) but not the other (other_app)? How can I get pytest to deselect those tests in every app instead of just one?
I have a Django project with multiple apps, and each app has a set of unit tests. I'm using pytest as my test runner. We have gotten to the point where we want to start writing integration tests. I was wondering if there is any way to keep the naming convention, and thus pytest's auto-discovery, but still be able (via a flag, maybe?) to run the different test types separately. The most intuitive solution that comes to mind is some sort of decorator on test methods or even TestCase classes (something like Category in JUnit).
something like:
@testtype('unittest')
def test_my_test(self):
    # do some testing

@testtype('integration')
def test_my_integration_test(self):
    # do some integration testing
and then I could run the tests like:
py.test --type=integration
py.test --type=unittest
Is there such a thing?
If not, the only other solution I can think of is to add a Django command and "manually" build a test suite and run it with pytest... I would prefer not to use this option. Is there any other solution that could help me?
Thanks
You can mark test functions.
import pytest

@pytest.mark.unittest
def test_my_test(self):
    # do some testing

@pytest.mark.integration
def test_my_integration_test(self):
    # do some integration testing
These custom markers must be registered in your pytest.ini file.
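For example, the registration might look like this (the marker descriptions are just placeholders):
[pytest]
markers =
    unittest: marks a test as a unit test
    integration: marks a test as an integration test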
Then use the -m flag to run the marked tests
py.test -v -m unittest
Another option would be to split your tests into unittest and integration directories. Then you can run tests in a specific directory with:
py.test -v unittest
or
py.test -v integration
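A layout for that approach could look something like this (the test file names are only illustrative):
unittest/
    test_models.py
integration/
    test_user_flows.py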
Another way to do this (without any config or code):
pytest -o "python_functions=*_integration_test"
You can also do this at the module or class level, e.g.:
python_files = integration_test_*.py
python_classes = IntegrationTest
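Put together in pytest.ini, that could look like this (the patterns are just examples):
[pytest]
python_files = integration_test_*.py
python_classes = IntegrationTest
python_functions = *_integration_test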
Ref:
https://docs.pytest.org/en/latest/example/pythoncollection.html#changing-naming-conventions
We have recently moved away from our custom test runner / discovery tool in favor of py.test. For proper unit test reporting when running under TeamCity, there is a pytest plugin: https://github.com/JetBrains/teamcity-python
When installed with:
python setup.py install
The plugin is discovered correctly by pytest. However, we don't want to install pytest and this plugin on our build machines. Instead, we would rather have them packaged as part of our project's "tools" directory.
How do I install / configure py.test to "discover" this plugin? We have tried adding pytest_plugins = "teamcity" (with several variations) to pytest's setup.cfg file, with no success.
There is no "pytest_plugins" configuration variable (see the output at the end of "py.test -h"). However, the environment variable "PYTEST_PLUGINS" contains comma-separated Python import paths. So if your plugin is in "teamcity/util/myplugin.py", you would set it like this:
export PYTEST_PLUGINS=teamcity.util.myplugin
You can also use the command line option "-p teamcity.util.myplugin" to achieve a similar effect, and then add it to the [pytest] section of your setup.cfg:
[pytest]
addopts = -p teamcity.util.myplugin