I am using tox to run tests in different envs with tox -p (run in parallel), but I have a problem with generating a combined coverage report for all tests.
tox.ini:
[tox]
envlist = env1,env2,report
skipsdist = True

[base]
deps = pytest

[testenv:env1]
deps = custom-package-1
    {[base]deps}
commands = pytest --cov-append tests/flows/test_1.py

[testenv:env2]
deps = custom-package-2
    {[base]deps}
commands = pytest --cov-append tests/flows/test_2.py

[testenv:report]
deps = coverage[toml]
commands = coverage report
depends = env1,env2
parallel_show_output = true
pyproject.toml coverage section:
[tool.coverage.report]
fail_under = 100
show_missing = true
exclude_lines = [
    'pragma: no cover',
    '\.\.\.',
    'if TYPE_CHECKING:',
    "if __name__ == '__main__':",
]
Error:
No source for code: '/Users/my_user/projects/my_proect/flows/__init__.py'.
Can someone tell me what is wrong with the provided configuration?
You need to remap the source files; see https://coverage.readthedocs.io/en/6.2/config.html?highlight=paths#paths and, for an example, https://github.com/tox-dev/tox/blob/master/tox.ini#L136-L143.
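As a rough sketch of such a remapping (the flows package name comes from the error message; the site-packages glob is an assumption, adjust it to your layout), you could add a [tool.coverage.paths] section to pyproject.toml:

[tool.coverage.paths]
source = [
    "flows/",                  # canonical location in the source tree
    "*/site-packages/flows/",  # where the code may be seen from inside a tox env
]

The remapping is applied when the data files are combined, so the report env would typically run coverage combine before coverage report.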
I am using flake8 (with flakehell but that should not interfere) and keep its configuration in a pyproject.toml file. I want to add a per-file-ignores config but nothing works and there is no documentation on how it is supposed to be formatted in a toml file.
Flake8 docs show only the 'native' config file format:
per-file-ignores =
    project/__init__.py:F401
    setup.py:E121
    other_project/*:W9
There is no description / example for pyproject.toml.
I tried:
per-file-ignores=["file1.py:W0621", "file2.py:W0621"]
and
per-file-ignores={"file1.py" = "W0621", "file2.py" = "W0621"}
both of which silently fail and have no effect (the warning is still raised).
What is the proper syntax for per-file-ignores setting in flake8/flakehell while using pyproject.toml?
flake8 does not have support for pyproject.toml, only .flake8, setup.cfg, and tox.ini
disclaimer: I am the flake8 maintainer
Currently, pyproject-flake8 enables you to write your flake8 settings in pyproject.toml like this:
# pyproject.toml
[tool.flake8]
exclude = ".venv"
max-complexity = 10
max-line-length = 100
extend-ignore = """
W503,
E203,
E701,
"""
per-file-ignores = """
__init__.py: F401
./src/*: E402
"""
I want to disable all pytest internal warnings, like PytestCacheWarning, in pytest.ini, but currently have no luck with it. The following ini file doesn't work as I expect:
[pytest]
filterwarnings:
    ignore::pytest.PytestCacheWarning
What is the right way to do it? Note: I don't want to disable all warnings, only those defined inside the pytest implementation.
Minimal reproducible example:
1) Create the following structure:
some_dir/
.pytest_cache/
test_something.py
pytest.ini
2) Put this into the test_something.py file:
def test_something():
    assert False
3) Put this into the pytest.ini file:
[pytest]
filterwarnings:
    ignore::pytest.PytestCacheWarning
4) Run chmod 444 .pytest_cache to produce the PytestCacheWarning: could not create cache path warning.
5) Run pytest:
========================== test session starts ===========================
platform linux -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /home/sanyash/repos/reproduce_pytest_bug, inifile: pytest.ini
plugins: celery-4.4.0, aiohttp-0.3.0
collected 1 item
test_something.py F [100%]
================================ FAILURES ================================
_____________________________ test_something _____________________________
def test_something():
> assert False
E assert False
test_something.py:2: AssertionError
============================ warnings summary ============================
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137: PytestCacheWarning: could not create cache path /home/sanyash/repos/reproduce_pytest_bug/.pytest_cache/v/cache/stepwise
self.warn("could not create cache path {path}", path=path)
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137: PytestCacheWarning: could not create cache path /home/sanyash/repos/reproduce_pytest_bug/.pytest_cache/v/cache/nodeids
self.warn("could not create cache path {path}", path=path)
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137: PytestCacheWarning: could not create cache path /home/sanyash/repos/reproduce_pytest_bug/.pytest_cache/v/cache/lastfailed
self.warn("could not create cache path {path}", path=path)
-- Docs: https://docs.pytest.org/en/latest/warnings.html
===================== 1 failed, 3 warnings in 0.03s ======================
You must use the import path to ignore it:
[pytest]
filterwarnings =
    ignore::pytest.PytestCacheWarning
So for all pytest warnings you would use the common base class:
[pytest]
filterwarnings =
    ignore::pytest.PytestWarning
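For a one-off run, the same filter can also be passed on the command line through pytest's -W option, which is equivalent to a filterwarnings ini entry:

$ pytest -W "ignore::pytest.PytestWarning"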
I have a my_module.py file that implements my_module and a file test_my_module.py that does import my_module and runs some tests written with pytest on it.
Normally I run the tests by cd-ing into the directory that contains these two files and then running
pytest
Now I want to use Bazel. I've added my_module.py as a py_binary but I don't know what the right way to invoke my tests is.
If you want a reusable setup that doesn't require adding a call to pytest at the end of every Python test file, you can create a py_test target that runs a wrapper script which calls pytest and forwards all arguments, and then create a macro around that py_test. I explain the detailed solution, with a link to the source code, in Experimentations on Bazel: Python (3), linter & pytest.
Create the Python tool (wrapping the call to pytest, or to pylint only) in tools/pytest/pytest_wrapper.py:
import sys

import pytest

# if using 'bazel test ...'
if __name__ == "__main__":
    sys.exit(pytest.main(sys.argv[1:]))
Create the macro in tools/pytest/defs.bzl:
"""Wrap pytest"""

load("@rules_python//python:defs.bzl", "py_test")
load("@my_python_deps//:requirements.bzl", "requirement")

def pytest_test(name, srcs, deps = [], args = [], data = [], **kwargs):
    """
    Call pytest
    """
    py_test(
        name = name,
        srcs = [
            "//tools/pytest:pytest_wrapper.py",
        ] + srcs,
        main = "//tools/pytest:pytest_wrapper.py",
        args = [
            "--capture=no",
            "--black",
            "--pylint",
            "--pylint-rcfile=$(location //tools/pytest:.pylintrc)",
            # "--mypy",
        ] + args + ["$(location :%s)" % x for x in srcs],
        python_version = "PY3",
        srcs_version = "PY3",
        deps = deps + [
            requirement("pytest"),
            requirement("pytest-black"),
            requirement("pytest-pylint"),
            # requirement("pytest-mypy"),
        ],
        data = [
            "//tools/pytest:.pylintrc",
        ] + data,
        **kwargs
    )
Expose some resources from tools/pytest/BUILD.bazel:
exports_files([
    "pytest_wrapper.py",
    ".pylintrc",
])
Call it from your package BUILD.bazel:
load("//tools/pytest:defs.bzl", "pytest_test")

...

pytest_test(
    name = "test",
    srcs = glob(["*.py"]),
    deps = [
        ...
    ],
)
Then calling bazel test //... means that pylint, pytest, and black are all part of the test flow.
Add the following code to test_my_module.py and mark the test script as a py_test instead of py_binary in your BUILD file:
if __name__ == "__main__":
    import pytest
    raise SystemExit(pytest.main([__file__]))
Then you can run your tests with bazel test test_my_module
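For completeness, a minimal BUILD file for this layout might look like the sketch below (the @my_python_deps pip repository name just follows the earlier answer and is an assumption; use whatever your requirements setup provides for requirement()):

load("@rules_python//python:defs.bzl", "py_library", "py_test")
load("@my_python_deps//:requirements.bzl", "requirement")  # hypothetical pip repo name

py_library(
    name = "my_module",
    srcs = ["my_module.py"],
)

py_test(
    name = "test_my_module",
    srcs = ["test_my_module.py"],
    deps = [
        ":my_module",
        requirement("pytest"),
    ],
)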
Following on from @David Bernard, who BTW wrote up his answer in an awesome series of blog posts, there is a curve-ball there with pytest + Bazel + Windows...
Long story short, you'll need to add legacy_create_init = 0 to the py_test rule call.
This is a workaround for a "feature" where Bazel will create __init__.py files in the sandbox, even when none were present in your repo: https://github.com/bazelbuild/rules_python/issues/55
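Concretely, that is just one more attribute on the py_test call, shown here as a sketch against the macro from the earlier answer:

py_test(
    name = name,
    # ... same attributes as in the macro above ...
    legacy_create_init = 0,  # don't auto-generate __init__.py files in the sandbox
)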
It seems a bunch of the suggestions here have been packaged up into https://github.com/caseyduquettesc/rules_python_pytest now.
load("#rules_python_pytest//python_pytest:defs.bzl", "py_pytest_test")
py_pytest_test(
name = "test_w_pytest",
size = "small",
srcs = ["test.py"],
deps = [
# TODO Add this for the user
requirement("pytest"),
],
)
Edit: I'm the author of the above repository
I have a framework that works under py.test. py.test can generate nice reports with the --html and --junitxml params, but clients using my framework don't always type these params on the command line when they run py.test. I want py.test to always generate reports when it is used with my framework, and I want to put these reports into the log folder, so I need to generate the report path at runtime. Can I do this with fixtures, or maybe via the plugin API?
Putting this in conftest.py will suffice:
def pytest_configure(config):
    if config.option.xmlpath is None:
        config.option.xmlpath = get_custom_xml_path()  # implement this
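get_custom_xml_path is just a placeholder name; as a sketch, something like this would drop a timestamped report into a logs/ folder (both the folder and the naming scheme are assumptions, not part of the original answer):

import time
from pathlib import Path

def get_custom_xml_path():
    # hypothetical helper: put the JUnit XML report into a logs/ folder,
    # timestamped so repeated runs don't overwrite each other
    log_dir = Path("logs")
    log_dir.mkdir(parents=True, exist_ok=True)
    return str(log_dir / "report_{}.xml".format(int(time.time())))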
The accepted answer is probably a bit more complicated than necessary for most people for a few reasons:
The decorator doesn't help. It doesn't matter when this executes.
There is no need to make a custom LogXML since you can just set the property here and it will be used.
slaveinput is specific to the pytest plugin xdist. I don't think there is any need to check for it, especially if you don't use xdist.
First of all, if you want to implicitly add the command line args to pytest, you can use the pytest.ini placed in the tests root dir with the addopts config value:
[pytest]
addopts=--verbose --junit-xml=/tmp/myreport.xml # etc
Of course, if you want to dynamically calculate the directory to store the reports, then you can't put it in the config and will need to extend pytest. The best spot would be the pytest_configure hook. Example:
# conftest.py
import tempfile

import pytest
from _pytest.junitxml import LogXML


@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    if config.option.xmlpath:  # was passed via config or command line
        return  # let pytest handle it
    if not hasattr(config, 'slaveinput'):
        with tempfile.NamedTemporaryFile(suffix='.xml') as tmpfile:
            xmlpath = tmpfile.name
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
If you remove the first if block, then pytest will completely ignore the --junit-xml arg passed via the command line or via the addopts value in the config.
Example run:
$ pytest
=================================== test session starts ====================================
platform darwin -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
rootdir: /Users/hoefling/projects/private/stackoverflow/so-48320357, inifile:
plugins: forked-0.2, asyncio-0.8.0, xdist-1.22.0, mock-1.6.3, hypothesis-3.44.4
collected 1 item
test_spam.py . [100%]
--- generated xml file: /var/folders/_y/2qk6029j4c7bwv0ddk3p96r00000gn/T/tmp1tleknm3.xml ---
================================ 1 passed in 0.01 seconds ==================================
The xml report is now put in a tempfile.
Configure the pytest.ini file with these parameters:
# content of pytest.ini
[pytest]
addopts = --html=report.html --self-contained-html
;addopts = -vv -rw --html=./results/report.html --self-contained-html
@hoefling's answer worked perfectly for me in conftest.py; the code looks simpler there.
import time

from _pytest.junitxml import LogXML

def pytest_configure(config):
    if not config.option.xmlpath and not hasattr(config, 'slaveinput'):
        xmlpath = "test_report_" + str(int(time.time())) + ".xml"
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
Just to make things clearer: pytest uses argparse, and request.config.option is an argparse.Namespace object. So if you would like to simulate a command line option such as pytest ... --docker-compose-remove-volumes, you can set the attribute docker_compose_remove_volumes directly on request.config.option (because --docker-compose-remove-volumes is converted to docker_compose_remove_volumes by the argparse module).
This example inverts the default of the --docker-compose-remove-volumes option, which is false, but allows you to switch it back by providing the --keep-containers option to pytest.
import pytest


def pytest_addoption(parser):
    parser.addoption("--keep-containers", action="store_true", default=False,
                     help="Keeps docker-compose on failure.")


@pytest.fixture(scope='session', autouse=True)
def load_env(request):
    is_to_keep_container = request.config.getoption("--keep-containers")
    if not is_to_keep_container:
        request.config.option.docker_compose_remove_volumes = True
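With that conftest in place, the behaviour from the command line looks roughly like this:

$ pytest                    # fixture forces docker_compose_remove_volumes to True
$ pytest --keep-containers  # option stays at its default (False), volumes are kept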
I have a long-running test, which lasts 2 days, that I don't want to include in a usual test run. I also don't want to type command line parameters that would deselect it and other tests on every usual test run; I would rather select a deselected-by-default test only when I actually need it. I tried renaming the test from test_longrun to longrun and using the command
py.test mytests.py::longrun
but that does not work.
As an alternative to the pytest_configure solution in the other answer, I found pytest.mark.skipif.
You need to put pytest_addoption() into conftest.py:
def pytest_addoption(parser):
    parser.addoption('--longrun', action='store_true', dest="longrun",
                     default=False, help="enable longrun-decorated tests")
And you use skipif in the test file.
import pytest

longrun = pytest.mark.skipif("not config.getoption('longrun')")


def test_usual(request):
    assert False, 'usual test failed'


@longrun
def test_longrun(request):
    assert False, 'longrun failed'
On the command line,
py.test
will not execute test_longrun(), but
py.test --longrun
will also execute test_longrun().
Try decorating your test with @pytest.mark.longrun.
In your conftest.py:
def pytest_addoption(parser):
    parser.addoption('--longrun', action='store_true', dest="longrun",
                     default=False, help="enable longrun-decorated tests")


def pytest_configure(config):
    if not config.option.longrun:
        setattr(config.option, 'markexpr', 'not longrun')
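Recent pytest versions also warn about unknown marks; if you see that, registering the longrun marker in pytest.ini should quiet the warning (this is just housekeeping on top of the answer above):

[pytest]
markers =
    longrun: marks a test as long-running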
This is a slightly different way.
Decorate your test with @pytest.mark.longrun:
@pytest.mark.longrun
def test_something():
    ...
At this point you can run everything except the tests marked with it using -m 'not longrun':
$ pytest -m 'not longrun'
or if you only want to run the longrun marked tests,
$ pytest -m 'longrun'
But to make -m 'not longrun' the default, add it to addopts in pytest.ini:
[pytest]
addopts =
    -m 'not longrun'
    ...
If you want to run all the tests, you can do
$ pytest -m 'longrun or not longrun'