I want to disable all pytest internal warnings, like PytestCacheWarning, in pytest.ini, but so far I've had no luck with it. The following ini file doesn't work as I expect:
[pytest]
filterwarnings:
    ignore::PytestCacheWarning
What is the right way to do it? Note: I don't want to disable all warnings, only those defined inside the pytest implementation.
Minimal reproducible example:
1) Create the following structure:
some_dir/
    .pytest_cache/
    test_something.py
    pytest.ini
2) Put this into test_something.py file:
def test_something():
    assert False
3) Put this into pytest.ini file:
[pytest]
filterwarnings:
    ignore::PytestCacheWarning
4) Run chmod 444 .pytest_cache to produce the PytestCacheWarning: could not create cache path warning
5) run pytest:
========================== test session starts ===========================
platform linux -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /home/sanyash/repos/reproduce_pytest_bug, inifile: pytest.ini
plugins: celery-4.4.0, aiohttp-0.3.0
collected 1 item
test_something.py F [100%]
================================ FAILURES ================================
_____________________________ test_something _____________________________
    def test_something():
>       assert False
E       assert False
test_something.py:2: AssertionError
============================ warnings summary ============================
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137: PytestCacheWarning: could not create cache path /home/sanyash/repos/reproduce_pytest_bug/.pytest_cache/v/cache/stepwise
self.warn("could not create cache path {path}", path=path)
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137: PytestCacheWarning: could not create cache path /home/sanyash/repos/reproduce_pytest_bug/.pytest_cache/v/cache/nodeids
self.warn("could not create cache path {path}", path=path)
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137: PytestCacheWarning: could not create cache path /home/sanyash/repos/reproduce_pytest_bug/.pytest_cache/v/cache/lastfailed
self.warn("could not create cache path {path}", path=path)
-- Docs: https://docs.pytest.org/en/latest/warnings.html
===================== 1 failed, 3 warnings in 0.03s ======================
You must use the import path to ignore it:
[pytest]
filterwarnings =
    ignore::pytest.PytestCacheWarning
so for all pytest warnings you would use the common base class:
[pytest]
filterwarnings =
    ignore::pytest.PytestWarning
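All pytest-internal warning classes derive from pytest.PytestWarning, which is why the single base-class filter above covers them. As a quick sanity check (a minimal sketch, assuming a pytest version such as 5.x that exposes these classes on the pytest namespace):
import pytest

# PytestCacheWarning (like the other Pytest*Warning classes) subclasses PytestWarning,
# so ignoring the base class silences all internal pytest warnings
assert issubclass(pytest.PytestCacheWarning, pytest.PytestWarning)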
We're using pytest and Python std logging, and have some tests in doctests. We'd like to enable log_cli to make debugging tests in an IDE easier (it lets stderr flow to the "live" console so one can see log statements as they are emitted when stepping through). The problem is there appears to be a bug/interaction between the use of logging (e.g. the presence of a call to logger.info("...")) and log_cli=true.
I don't see any other flags or mention of this in the docs, so it appears to be a bug, but was hoping there is a workaround.
This test module passes:
# bugjar.py
"""
>>> dummy()
'retval'
"""
import logging

logger = logging.getLogger(__name__)

def dummy():
    # logger.info("AnInfoLog")  ## un-comment to break test
    return "retval"
but un-commenting the logger.info(...) call (no other changes) causes a failure, unless I remove log_cli from pytest.ini:
002 >>> dummy()
Expected:
'retval'
Got nothing
Here is my command line & relevant version output(s):
(venv) $ ./venv/bin/pytest -c ./pytest.ini ./bugjar.py
======================================================================= test session starts ========================================================================
platform linux -- Python 3.8.5, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: -omitted-/test-unit, configfile: ./pytest.ini
plugins: forked-1.3.0, xdist-2.2.1, profiling-1.7.0
collected 1 item
bugjar.py::bugjar
bugjar.py::bugjar
-------------------------------------------------------------------------- live log call ---------------------------------------------------------------------------
INFO bugjar:bugjar.py:8 AnInfoLog
FAILED [100%]
============================================================================= FAILURES =============================================================================
_________________________________________________________________________ [doctest] bugjar _________________________________________________________________________
001
002 >>> dummy()
Expected:
'retval'
Got nothing
and my pytest.ini (note: the comments are correct; pass/fail is not affected by use of log_file or other addopts):
[pytest]
addopts = --doctest-modules # --profile --profile-svg
norecursedirs = logs bin tmp* scratch venv* data
# log_file = pytest.log
log_cli = true
log_cli_level=debug
Removing the log_cli* settings from pytest.ini makes the issue go away.
This seems clearly related to how log_cli manipulates output capturing, which the doctest itself relies on, but it is also not the expected behavior.
I am hoping I've made a mistake, or that there is a workaround to get live log output in a bash or IDE shell window / debugger.
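One possible interim workaround, based on the observation above that removing log_cli avoids the failure: keep log_cli = true in pytest.ini for interactive debugging, and switch it off with pytest's -o/--override-ini flag when running the doctest modules (a sketch; this sidesteps the interaction rather than fixing it):
# keep log_cli = true in pytest.ini for IDE debugging,
# but override it for the doctest run
pytest -o log_cli=false -c ./pytest.ini ./bugjar.py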
I'm parameterizing pytest tests with variables defined in an external YAML file using the pytest_generate_tests hook. The name of the variable file is specified on the pytest command line (--params_file). Only some of the test functions within a module are parameterized and require the variables in this file. Thus, the command line option defining the variables is an optional argument. If the optional argument is omitted from the command line, then I want pytest to just "skip" those test functions which need the external parameterized variables and just run the "other" tests which are not parameterized. The problem is, if the command line option is omitted, pytest is skipping ALL of the test functions, not just the test functions that require the parameters.
Here is the test module file:
def test_network_validate_1(logger, device_connections):
    ### Test code omitted.....

def test_lsp_throttle_timers(params_file, logger, device_connections):
    ### Test code omitted.....

def test_network_validate_2(logger, device_connections):
    ### Test code omitted.....
pytest_generate_tests hook in conftest.py:
# Note, I tried scope at function level as well but that did not help
@pytest.fixture(scope='session')
def params_file(request):
    pass

def pytest_generate_tests(metafunc):
    ### Get Pytest rootdir
    rootdir = metafunc.config.rootdir
    print(f"*********** Test Function: {metafunc.function.__name__}")
    if "params_file" in metafunc.fixturenames:
        print("*********** Hello Silver ****************")
        if metafunc.config.getoption("--params_file"):
            #################################################################
            # Params file now located as a separate command line argument for
            # greater flexibility
            #################################################################
            params_file = metafunc.config.getoption("--params_file")
            params_doc = dnet_generic.open_yaml_file(Path(rootdir, params_file),
                                                     loader_type=yaml.Loader)
            test_module = metafunc.module.__name__
            test_function = metafunc.function.__name__
            names, values = dnet_generic.get_test_parameters(test_module,
                                                             test_function,
                                                             params_doc,)
            metafunc.parametrize(names, values)
        else:
            pytest.skip("This test requires the params_file argument")
When the params_file option is present, everything works fine:
pytest isis/test_isis_lsp_throttle.py --testinfo topoA_r28.yml --ulog -s --params_file common/topoA_params.yml --collect-only
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.4, pytest-3.7.0, py-1.8.0, pluggy-0.13.0
rootdir: /home/as2863/pythonProjects/p1-automation, inifile: pytest.ini
plugins: csv-2.0.1, check-0.3.5, pylama-7.6.6, dependency-0.4.0, instafail-0.4.0, ordering-0.6, repeat-0.7.0, reportportal-5.0.3
collecting 0 items *********** Test Function: test_network_validate_1
*********** Test Function: test_lsp_throttle_timers
*********** Test Function: test_network_validate_2
collected 3 items
<Package '/home/as2863/pythonProjects/p1-automation/isis'>
<Module 'test_isis_lsp_throttle.py'>
<Function 'test_network_validate_1'>
<Function 'test_lsp_throttle_timers'>
<Function 'test_network_validate_2'>
================================================================================ no tests ran in 0.02 seconds =================================================================================
When the params_file option is omitted, you can see that no tests are run, and the print statements show it does not even reach pytest_generate_tests for "test_network_validate_2":
pytest isis/test_isis_lsp_throttle.py --testinfo topoA_r28.yml --ulog -s --collect-only
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.4, pytest-3.7.0, py-1.8.0, pluggy-0.13.0
rootdir: /home/as2863/pythonProjects/p1-automation, inifile: pytest.ini
plugins: csv-2.0.1, check-0.3.5, pylama-7.6.6, dependency-0.4.0, instafail-0.4.0, ordering-0.6, repeat-0.7.0, reportportal-5.0.3
collecting 0 items
*********** Test Function: test_network_validate_1
*********** Test Function: test_lsp_throttle_timers
*********** Hello Silver ****************
collected 0 items / 1 skipped
================================================================================== 1 skipped in 0.11 seconds ==================================================================================
As has been found in the discussion in the comments, you cannot use pytest.skip in pytest_generate_tests, because it works at module scope and skips the whole module. To skip the concrete test, you can do something like this:
import pytest

@pytest.fixture
def skip_test():
    pytest.skip('Some reason')

def pytest_generate_tests(metafunc):
    if "params_file" in metafunc.fixturenames:
        if metafunc.config.getoption("--params_file"):
            ...
            metafunc.parametrize(names, values)
        else:
            metafunc.fixturenames.insert(0, 'skip_test')
That is, you introduce a fixture that skips the concrete test, and add this fixture to the test. Make sure to insert it as the first fixture, so no other fixtures will be executed.
While MrBean Bremen's answer may work, according to the pytest authors dynamically altering the fixture list is not something they really want to support. This approach, however, is a bit more supported.
# This is "auto used", but doesn't always skip the test unless the test parameters require it
#pytest.fixture(autouse=True)
def skip_test(request):
# For some reason this is only conditionally set if a param is passed
# https://github.com/pytest-dev/pytest/blob/791b51d0faea365aa9474bb83f9cd964fe265c21/src/_pytest/fixtures.py#L762
if not hasattr(request, 'param'):
return
pytest.skip(f"Test skipped: {request.param}")
And in your test module:
import pytest

def _add_flag_parameter(metafunc: pytest.Metafunc, name: str):
    if name not in metafunc.fixturenames:
        return
    flag_value = metafunc.config.getoption(name)
    if flag_value:
        metafunc.parametrize(name, [flag_value])
    else:
        metafunc.parametrize("skip_test", [f"Missing flag '{name}'"], indirect=True)

def pytest_generate_tests(metafunc: pytest.Metafunc):
    _add_flag_parameter(metafunc, "params_file")
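A minimal sketch of how the pieces above might fit together; the pytest_addoption registration and the test body are assumptions (not part of the original answer), and the sketch assumes the no-op params_file fixture from the question's conftest is still defined:
# conftest.py (sketch) -- the question implies --params_file is registered roughly like this
def pytest_addoption(parser):
    parser.addoption("--params_file", action="store", default=None,
                     help="YAML file with test parameters")

# test module (sketch)
def test_lsp_throttle_timers(params_file):
    # Parametrized with the flag's value when --params_file is given; when it is
    # omitted, _add_flag_parameter parametrizes the autouse skip_test fixture
    # instead, which calls pytest.skip for just this test.
    assert params_file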
I have this little project where I use pytest and pytest-dependency with tox to develop integration tests on some code. Until now I used one base class (BTestClass) with some common tests in the root directory, and the specific tests for each code component in a test_Component.py file next to it, implementing a TestC class that inherits from BTestClass.
Everything worked fine until then. Now I want to add a BTestClass2 for another set of components, so I added another layer of inheritance (a common ATestClass that the B classes inherit from). But now it doesn't work: pytest passes the common A tests but then skips the tests that depend on them. I have no idea why.
Here's the filesystem layout:
λ tree /F
Folder PATH listing
Volume serial number is F029-7357
C:.
│ B.py
│ requirements-tox.txt
│ tox.ini
│
├───app_C
│ └───tests
│ test_C.py
│
└───common
A.py
common\A.py
import pytest

class ATestClass():
    @pytest.mark.dependency(name='test_a')
    def test_a(self):
        assert True
B.py
import pytest
from common.A import ATestClass

class BTestClass(ATestClass):
    @pytest.mark.dependency(name='test_b', depends=['test_a'])
    def test_b(self):
        assert True
test_C.py
import pytest
import sys
sys.path.append('.')
from B import *

class TestC(BTestClass):
    @pytest.mark.dependency(name='test_c', depends=['test_b'])
    def test_c(self):
        assert True
pytest output:
λ tox -- -rs
py38 installed: ...
py38 run-test-pre: PYTHONHASHSEED='367'
py38 run-test: commands[0] | pytest -x -v -rs
=============================================== test session starts ===============================================
platform win32 -- Python 3.8.1, pytest-6.1.1, py-1.9.0, pluggy-0.13.1 -- ...\poc\.tox\py38\scripts\python.exe
cachedir: .tox\py38\.pytest_cache
rootdir: ...\poc
plugins: dependency-0.5.1
collected 3 items
app_C/tests/test_C.py::TestC::test_b SKIPPED [ 33%]
app_C/tests/test_C.py::TestC::test_c SKIPPED [ 66%]
app_C/tests/test_C.py::TestC::test_a PASSED [100%]
============================================= short test summary info =============================================
SKIPPED [1] .tox\py38\lib\site-packages\pytest_dependency.py:103: test_b depends on test_a
SKIPPED [1] .tox\py38\lib\site-packages\pytest_dependency.py:103: test_c depends on test_b
===================================== 1 passed, 2 skipped, 1 warning in 0.14s =====================================
_____________________________________________________ summary _____________________________________________________
py38: commands succeeded
congratulations :)
Any idea why test_b is skipped and not executed?
Edit: If I make BTestClass standalone, removing A / ATestClass from the picture, it works fine.
collected 2 items
app_C/tests/test_C.py::TestC::test_b PASSED [ 50%]
app_C/tests/test_C.py::TestC::test_c PASSED [100%]
In pytest-dependency, a dependency on another test implies that that test runs before the dependent test. If that is not the case (in your example test_b is run before test_a, because test_a is located in a subdirectory), the test is just skipped. pytest-dependency doesn't do any reordering of tests (unfortunately).
If you cannot easily establish the order in which tests are run via naming, you may use the pytest-ordering plugin to bring the tests into the needed order. In your case you could do:
class ATestClass:
    @pytest.mark.dependency(name='test_a')
    @pytest.mark.run(order=0)
    def test_a(self):
        assert True
    ...

class BTestClass(ATestClass):
    @pytest.mark.dependency(name='test_b', depends=['test_a'])
    @pytest.mark.run(order=1)
    def test_b(self):
        assert True
In this case, the tests are run in the order test_a - test_b - test_c, and all tests will run.
UPDATE:
You can also use pytest-order, which is a fork of pytest-ordering. If you use the pytest option --order-dependencies it will try to re-order the tests with dependencies created by pytest-dependencies, without the need to add extra marks.
Disclaimer: I'm the author of that fork.
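A minimal sketch of enabling that via the ini file (assuming pytest-order is installed):
[pytest]
addopts = --order-dependencies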
I have a framework which works under py.test. py.test can generate nice reports with the --html and --junitxml params, but the clients using my framework don't always type these params on the command line when they run py.test. I want py.test to always generate reports when it is used with my framework, and I want to put these reports into a log folder, so I need to generate the report path at runtime. Can I do this with fixtures? Or maybe via the plugin API?
Putting this in conftest.py will suffice:
def pytest_configure(config):
    if config.option.xmlpath is None:
        config.option.xmlpath = get_custom_xml_path()  # implement this
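A minimal sketch of what get_custom_xml_path() could look like for the question's goal of putting reports into a log folder (the logs directory and file-name scheme are assumptions):
import os
import time

def get_custom_xml_path():
    # timestamped junit report inside a logs/ folder, created on demand
    os.makedirs("logs", exist_ok=True)
    return os.path.join("logs", f"junit-{int(time.time())}.xml")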
The accepted answer is probably a bit more complicated than necessary for most people for a few reasons:
The decorator doesn't help. It doesn't matter when this executes.
There is no need to make a custom LogXML, since you can just set the property here and it will be used.
slaveinput is specific to the xdist plugin. I don't think there is any need to check for that, especially if you don't use xdist.
First of all, if you want to implicitly add the command line args to pytest, you can use the pytest.ini placed in the tests root dir with the addopts config value:
[pytest]
addopts=--verbose --junit-xml=/tmp/myreport.xml # etc
Of course, if you want to dynamically calculate the directory to store the reports, then you can't put it in the config and will need to extend pytest. The best spot would be the pytest_configure hook. Example:
# conftest.py
import tempfile
import pytest
from _pytest.junitxml import LogXML

@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    if config.option.xmlpath:  # was passed via config or command line
        return  # let pytest handle it
    if not hasattr(config, 'slaveinput'):
        with tempfile.NamedTemporaryFile(suffix='.xml') as tmpfile:
            xmlpath = tmpfile.name
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
If you remove the first if block, then pytest will completely ignore the --junit-xml arg passed via the command line or in the addopts config value.
Example run:
$ pytest
=================================== test session starts ====================================
platform darwin -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
rootdir: /Users/hoefling/projects/private/stackoverflow/so-48320357, inifile:
plugins: forked-0.2, asyncio-0.8.0, xdist-1.22.0, mock-1.6.3, hypothesis-3.44.4
collected 1 item
test_spam.py . [100%]
--- generated xml file: /var/folders/_y/2qk6029j4c7bwv0ddk3p96r00000gn/T/tmp1tleknm3.xml ---
================================ 1 passed in 0.01 seconds ==================================
The xml report is now put in a tempfile.
Configure the pytest.ini file with parameters (the --html option requires the pytest-html plugin):
# content of pytest.ini
[pytest]
addopts = --html=report.html --self-contained-html
;addopts = -vv -rw --html=./results/report.html --self-contained-html
@hoefling's answer worked perfectly for me in conftest.py. The code looks simpler there.
import time
from _pytest.junitxml import LogXML

def pytest_configure(config):
    if not config.option.xmlpath and not hasattr(config, 'slaveinput'):
        xmlpath = "test_report_" + str(int(time.time())) + ".xml"
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
Just to make things clearer: pytest uses argparse, and request.config.option is an argparse.Namespace object. So, if you would like to simulate a command line option such as pytest ... --docker-compose-remove-volumes, you can directly set the attribute docker_compose_remove_volumes on request.config.option (because --docker-compose-remove-volumes is converted to docker_compose_remove_volumes by the argparse module).
This example inverts the default of --docker-compose-remove-volumes, which is false, but allows you to switch it back by passing the --keep-containers option to pytest.
import pytest

def pytest_addoption(parser):
    parser.addoption("--keep-containers", action="store_true", default=False,
                     help="Keeps docker-compose on failure.")

@pytest.fixture(scope='session', autouse=True)
def load_env(request):
    is_to_keep_container = request.config.getoption("--keep-containers")
    if not is_to_keep_container:
        request.config.option.docker_compose_remove_volumes = True
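Usage would then look roughly like this (assuming the plugin that defines --docker-compose-remove-volumes, e.g. pytest-docker-compose, is installed):
pytest                    # volumes are removed after the run (forced by the fixture)
pytest --keep-containers  # containers and volumes are kept for debugging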
I have this testing code:
import pytest

def params():
    dont_skip = pytest.mark.skipif(False, reason="don't skip")
    return [dont_skip("foo"), dont_skip("bar")]

@pytest.mark.skipif(True, reason="always skip")
@pytest.mark.parametrize("param", params())
@pytest.mark.skipif(True, reason="really always skip please")
def test_foo(param):
    assert False
Yet test_foo is not skipped, even though there are skipif decorators attached to test_foo (I tried in both orders, as you can see above):
============================= test session starts ==============================
platform darwin -- Python 3.5.0, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /Volumes/Home/Users/Waleed/tmp/python/explainerr/test, inifile:
collected 2 items
test/test_example.py FF
=================================== FAILURES ===================================
________________________________ test_foo[foo] _________________________________
param = 'foo'
    @pytest.mark.skipif(True, reason="always skip")
    @pytest.mark.parametrize("param", params())
    @pytest.mark.skipif(True, reason="really always skip")
    def test_foo(param):
>       assert False
E       assert False
test/test_example.py:13: AssertionError
________________________________ test_foo[bar] _________________________________
param = 'bar'
    @pytest.mark.skipif(True, reason="always skip")
    @pytest.mark.parametrize("param", params())
    @pytest.mark.skipif(True, reason="really always skip")
    def test_foo(param):
>       assert False
E       assert False
test/test_example.py:13: AssertionError
=========================== 2 failed in 0.01 seconds ===========================
If I change this line
dont_skip = pytest.mark.skipif(False, reason="don't skip")
to
dont_skip = pytest.mark.skipif(True, reason="don't skip")
then it skips the test cases:
============================= test session starts ==============================
platform darwin -- Python 3.5.0, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /Volumes/Home/Users/Waleed/tmp/python/explainerr/test, inifile:
collected 2 items
test/test_example.py ss
========================== 2 skipped in 0.01 seconds ===========================
How do I get pytest.mark.skipif to work when also using skippable parameters with pytest.mark.parametrize? I'm using Python 3.5.0 with Pytest 2.8.5.
The pytest version that you are using is very old. I used your code in my environment and both tests are skipped at the first skipif mark.
python3 -m pytest test_b.py
==================================================================================== test session starts =====================================================================================
platform darwin -- Python 3.6.5, pytest-3.9.1, py-1.7.0, pluggy-0.8.0
rootdir:<redacted>, inifile:
collected 2 items
test_b.py ss [100%]
====================================================================================== warnings summary ======================================================================================
test_b.py:9: RemovedInPytest4Warning: Applying marks directly to parameters is deprecated, please use pytest.param(..., marks=...) instead.
For more details, see: https://docs.pytest.org/en/latest/parametrize.html
  @pytest.mark.skipif(True, reason="always skip")
test_b.py:9: RemovedInPytest4Warning: Applying marks directly to parameters is deprecated, please use pytest.param(..., marks=...) instead.
For more details, see: https://docs.pytest.org/en/latest/parametrize.html
  @pytest.mark.skipif(True, reason="always skip")
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================================================================== 2 skipped, 2 warnings in 0.03 seconds ============================================================================
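For reference, the deprecation warning above already names the modern spelling: attach marks to individual parameters with pytest.param(..., marks=...) instead of calling the mark object on the value. A sketch of the question's params() rewritten that way (behaviour otherwise unchanged):
import pytest

def params():
    dont_skip = pytest.mark.skipif(False, reason="don't skip")
    return [pytest.param("foo", marks=dont_skip),
            pytest.param("bar", marks=dont_skip)]

@pytest.mark.skipif(True, reason="always skip")
@pytest.mark.parametrize("param", params())
def test_foo(param):
    assert False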