pytest log_cli + standard logging = breaks doctest - python

We're using pytest with Python's standard logging, and some of our tests are doctests. We'd like to enable log_cli to make debugging tests in the IDE easier (it lets stderr flow to the "live" console so one can see log statements as they are emitted while stepping through). The problem is that there appears to be a bug/interaction between the mere use of logging (e.g. the presence of a call to logger.info("...") etc.) and log_cli=true.
I don't see any other flags or mention of this in the docs, so it appears to be a bug, but was hoping there is a workaround.
This test module passes:
# bugjar.py
"""
>>> dummy()
'retval'
"""
import logging

logger = logging.getLogger(__name__)

def dummy():
    # logger.info("AnInfoLog")  ## un-comment to break the test
    return "retval"
but un-commenting the logger.info() call (with no other changes) causes a failure (unless I remove log_cli from pytest.ini):
002 >>> dummy()
Expected:
'retval'
Got nothing
Here is my command line & relevant version output(s):
(venv) $ ./venv/bin/pytest -c ./pytest.ini ./bugjar.py
======================================================================= test session starts ========================================================================
platform linux -- Python 3.8.5, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: -omitted-/test-unit, configfile: ./pytest.ini
plugins: forked-1.3.0, xdist-2.2.1, profiling-1.7.0
collected 1 item
bugjar.py::bugjar
bugjar.py::bugjar
-------------------------------------------------------------------------- live log call ---------------------------------------------------------------------------
INFO bugjar:bugjar.py:8 AnInfoLog
FAILED [100%]
============================================================================= FAILURES =============================================================================
_________________________________________________________________________ [doctest] bugjar _________________________________________________________________________
001
002 >>> dummy()
Expected:
'retval'
Got nothing
and my pytest.ini (note: the comments are correct; pass/fail is not affected by use of log_file or other addopts):
[pytest]
addopts = --doctest-modules # --profile --profile-svg
norecursedirs = logs bin tmp* scratch venv* data
# log_file = pytest.log
log_cli = true
log_cli_level=debug
Removing log_cli* from pytest.ini makes the issue go away.
This seems clearly related to what log_cli is manipulating when capturing output for use in the doctest itself, but it is also not the expected behavior.
I am hoping I've made a mistake, or that there is a workaround to get live log output in a bash or IDE shell window / debugger.
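One workaround to try (a sketch, not verified against this pytest version): leave log_cli out of pytest.ini entirely and enable live logging per-run via pytest's -o/--override-ini flag, so doctest runs keep normal output capture:

```shell
# enable live logging only while debugging in the IDE / shell
pytest -o log_cli=true -o log_cli_level=debug ./bugjar.py

# for doctest runs, omit the override so capture works normally
pytest --doctest-modules ./bugjar.py
```

This keeps the ini file doctest-safe while still allowing live logs on demand.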

Related

Can the unittest framework discover nested tests?

Is the following nested structure discoverable by unittest?
class HerclTests(unittest.TestCase):
    def testJobs(self):
        def testJobSubmit():
            jid = "foobar"
            assert jid, 'hercl job submit failed no job_id'
            return jid

        def testJobShow(jid):
            jid = "foobar"
            out, errout = bash(f"hercl job show --jid {jid} --form json")
            assert 'Job run has been accepted by airflow successfully' in out, 'hercl job show failed'
Here is the error when trying to run unittest:
============================= test session starts ==============================
platform darwin -- Python 3.6.7, pytest-5.4.3, py-1.10.0, pluggy-0.13.1 -- /Users/steve/git/hercl/.venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/steve/git/hercl/tests
collecting ... collected 0 items
ERROR: not found: /Users/steve/git/hercl/tests/hercl_flow_test.py::HerclTests::testJobs::testJobSubmit
(no name '/Users/steve/git/hercl/tests/hercl_flow_test.py::HerclTests::testJobs::testJobSubmit' in any of [<TestCaseFunction testJobs>])
============================ no tests ran in 0.01s =============================
Can this structure be tweaked to work with unittest or must each test method be elevated to the level of the HerclTests class?
This can't work: functions defined inside another function ("inner functions") only "exist" as variables in the outer function's local scope. They're not accessible to any other code. The unittest discovery won't find them, and couldn't call them even if it knew about them.
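For the structure above to be discovered, each inner function must be promoted to its own test method on the TestCase. A minimal runnable sketch (the question's bash()/hercl helpers are replaced with a stand-in, and the expected output string is assumed):

```python
import io
import unittest

def bash(cmd):
    # stand-in for the question's bash() helper
    return "Job run has been accepted by airflow successfully", ""

class HerclTests(unittest.TestCase):
    def test_job_submit(self):
        jid = "foobar"
        self.assertTrue(jid, "hercl job submit failed: no job_id")

    def test_job_show(self):
        jid = "foobar"
        out, errout = bash(f"hercl job show --jid {jid} --form json")
        self.assertIn("Job run has been accepted by airflow successfully", out)

# run the suite programmatically to show both methods are now discovered
suite = unittest.defaultTestLoader.loadTestsFromTestCase(HerclTests)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

After this restructuring, result.testsRun is 2: the loader sees both methods, which it never could with nested functions.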

pytest.skip within pytest_generate_tests skips all test functions in module instead of specific tests

I'm parameterizing pytest tests with variables defined in an external YAML file using the pytest_generate_tests hook. The name of the variable file is specified on the pytest command line (--params_file). Only some of the test functions within a module are parameterized and require the variables in this file. Thus, the command line option defining the variables is an optional argument. If the optional argument is omitted from the command line, then I want pytest to just "skip" those test functions which need the external parameterized variables and just run the "other" tests which are not parameterized. The problem is, if the command line option is omitted, pytest is skipping ALL of the test functions, not just the test functions that require the parameters.
Here is the test module file:
def test_network_validate_1(logger, device_connections):
    ### Test code omitted.....

def test_lsp_throttle_timers(params_file, logger, device_connections):
    ### Test code omitted.....

def test_network_validate_2(logger, device_connections):
    ### Test code omitted.....
pytest_generate_tests hook in conftest.py:
# Note, I tried scope at function level as well but that did not help
@pytest.fixture(scope='session')
def params_file(request):
    pass

def pytest_generate_tests(metafunc):
    ### Get Pytest rootdir
    rootdir = metafunc.config.rootdir
    print(f"*********** Test Function: {metafunc.function.__name__}")
    if "params_file" in metafunc.fixturenames:
        print("*********** Hello Silver ****************")
        if metafunc.config.getoption("--params_file"):
            #################################################################
            # Params file now located as a separate command line argument for
            # greater flexibility
            #################################################################
            params_file = metafunc.config.getoption("--params_file")
            params_doc = dnet_generic.open_yaml_file(Path(rootdir, params_file),
                                                     loader_type=yaml.Loader)
            test_module = metafunc.module.__name__
            test_function = metafunc.function.__name__
            names, values = dnet_generic.get_test_parameters(test_module,
                                                             test_function,
                                                             params_doc)
            metafunc.parametrize(names, values)
        else:
            pytest.skip("This test requires the params_file argument")
When the params_file option is present, everything works fine:
pytest isis/test_isis_lsp_throttle.py --testinfo topoA_r28.yml --ulog -s --params_file common/topoA_params.yml --collect-only
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.4, pytest-3.7.0, py-1.8.0, pluggy-0.13.0
rootdir: /home/as2863/pythonProjects/p1-automation, inifile: pytest.ini
plugins: csv-2.0.1, check-0.3.5, pylama-7.6.6, dependency-0.4.0, instafail-0.4.0, ordering-0.6, repeat-0.7.0, reportportal-5.0.3
collecting 0 items *********** Test Function: test_network_validate_1
*********** Test Function: test_lsp_throttle_timers
*********** Test Function: test_network_validate_2
collected 3 items
<Package '/home/as2863/pythonProjects/p1-automation/isis'>
<Module 'test_isis_lsp_throttle.py'>
<Function 'test_network_validate_1'>
<Function 'test_lsp_throttle_timers'>
<Function 'test_network_validate_2'>
================================================================================ no tests ran in 0.02 seconds =================================================================================
When the params_file option is omitted, you can see that no tests are run, and the print statements show that pytest does not even reach pytest_generate_tests for "test_network_validate_2":
pytest isis/test_isis_lsp_throttle.py --testinfo topoA_r28.yml --ulog -s --collect-only
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.4, pytest-3.7.0, py-1.8.0, pluggy-0.13.0
rootdir: /home/as2863/pythonProjects/p1-automation, inifile: pytest.ini
plugins: csv-2.0.1, check-0.3.5, pylama-7.6.6, dependency-0.4.0, instafail-0.4.0, ordering-0.6, repeat-0.7.0, reportportal-5.0.3
collecting 0 items
*********** Test Function: test_network_validate_1
*********** Test Function: test_lsp_throttle_timers
*********** Hello Silver ****************
collected 0 items / 1 skipped
================================================================================== 1 skipped in 0.11 seconds ==================================================================================
As has been found in the discussion in the comments, you cannot use pytest.skip in pytest_generate_tests, because it works at module scope. To skip a specific test, you can do something like this:
@pytest.fixture
def skip_test():
    pytest.skip('Some reason')

def pytest_generate_tests(metafunc):
    if "params_file" in metafunc.fixturenames:
        if metafunc.config.getoption("--params_file"):
            ...
            metafunc.parametrize(names, values)
        else:
            metafunc.fixturenames.insert(0, 'skip_test')
That is, you introduce a fixture that skips the test, and add this fixture to the test. Make sure to insert it as the first fixture, so that no other fixtures are executed.
While MrBean Bremen's answer may work, according to the pytest authors dynamically altering the fixture list is not something they really want to support. This approach, however, is a bit more supported.
# This is "auto used", but doesn't always skip the test unless the test parameters require it
@pytest.fixture(autouse=True)
def skip_test(request):
    # For some reason this is only conditionally set if a param is passed
    # https://github.com/pytest-dev/pytest/blob/791b51d0faea365aa9474bb83f9cd964fe265c21/src/_pytest/fixtures.py#L762
    if not hasattr(request, 'param'):
        return
    pytest.skip(f"Test skipped: {request.param}")
And in your test module:
def _add_flag_parameter(metafunc: pytest.Metafunc, name: str):
    if name not in metafunc.fixturenames:
        return
    flag_value = metafunc.config.getoption(name)
    if flag_value:
        metafunc.parametrize(name, [flag_value])
    else:
        metafunc.parametrize("skip_test", [f"Missing flag '{name}'"], indirect=True)

def pytest_generate_tests(metafunc: pytest.Metafunc):
    _add_flag_parameter(metafunc, "params_file")

How to disable internal pytest warnings?

I want to disable all pytest internal warnings like PytestCacheWarning in pytest.ini but currently have no luck with it. The following ini file doesn't work as I expect:
[pytest]
filterwarnings:
    ignore::pytest.PytestCacheWarning
What is the right way to do it? Note: I don't want to disable all warnings, only those defined inside pytest implementation.
Minimal reproducible example:
1) Create the following structure:
some_dir/
.pytest_cache/
test_something.py
pytest.ini
2) Put this into test_something.py file:
def test_something():
assert False
3) Put this into pytest.ini file:
[pytest]
filterwarnings:
    ignore::pytest.PytestCacheWarning
4) do chmod 444 .pytest_cache to produce the PytestCacheWarning: could not create cache path warning
5) run pytest:
========================== test session starts ===========================
platform linux -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /home/sanyash/repos/reproduce_pytest_bug, inifile: pytest.ini
plugins: celery-4.4.0, aiohttp-0.3.0
collected 1 item
test_something.py F [100%]
================================ FAILURES ================================
_____________________________ test_something _____________________________
def test_something():
> assert False
E assert False
test_something.py:2: AssertionError
============================ warnings summary ============================
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137: PytestCacheWarning: could not create cache path /home/sanyash/repos/reproduce_pytest_bug/.pytest_cache/v/cache/stepwise
self.warn("could not create cache path {path}", path=path)
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137: PytestCacheWarning: could not create cache path /home/sanyash/repos/reproduce_pytest_bug/.pytest_cache/v/cache/nodeids
self.warn("could not create cache path {path}", path=path)
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137
/home/sanyash/.local/lib/python3.7/site-packages/_pytest/cacheprovider.py:137: PytestCacheWarning: could not create cache path /home/sanyash/repos/reproduce_pytest_bug/.pytest_cache/v/cache/lastfailed
self.warn("could not create cache path {path}", path=path)
-- Docs: https://docs.pytest.org/en/latest/warnings.html
===================== 1 failed, 3 warnings in 0.03s ======================
You must use the import path to ignore it:
[pytest]
filterwarnings =
    ignore::pytest.PytestCacheWarning
so for all pytest warnings you would use the common base class:
[pytest]
filterwarnings =
    ignore::pytest.PytestWarning
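The class hierarchy is what makes the base-class filter work: a warnings filter for a category also matches all of its subclasses. A small stdlib-only sketch, with stand-in classes mirroring pytest's hierarchy:

```python
import warnings

class PytestWarning(UserWarning):          # stand-in for pytest.PytestWarning
    pass

class PytestCacheWarning(PytestWarning):   # stand-in for pytest.PytestCacheWarning
    pass

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("error")
    # ignoring the base class also silences every subclass
    warnings.filterwarnings("ignore", category=PytestWarning)
    warnings.warn("could not create cache path", PytestCacheWarning)

# caught is empty: the subclass warning matched the base-class ignore filter
```

The same subclass matching is why ignore::pytest.PytestWarning in the ini file covers PytestCacheWarning and the other internal warning types.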

How to generate coverage report for http based integration tests?

I am writing integration tests for a project in which I am making HTTP calls and testing whether they were successful or not.
Since I am not importing any module and not calling functions directly, the coverage.py report for this is 0%.
I want to know how I can generate a coverage report for such integration HTTP request tests.
The recipe is pretty much this:
1. Ensure the backend starts in code coverage mode
2. Run the tests
3. Ensure the backend coverage is written to file
4. Read the coverage from file and append it to the test run coverage
Example:
backend
Imagine you have a dummy backend server that responds with a "Hello World" page on GET requests:
# backend.py
from http.server import BaseHTTPRequestHandler, HTTPServer

class DummyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.end_headers()
        self.wfile.write('<html><body><h1>Hello World</h1></body></html>'.encode())

if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8000), DummyHandler).serve_forever()
test
A simple test that makes an HTTP request and verifies the response contains "Hello World":
# tests/test_server.py
import requests

def test_GET():
    resp = requests.get('http://127.0.0.1:8000')
    resp.raise_for_status()
    assert 'Hello World' in resp.text
Recipe
# tests/conftest.py
import os
import signal
import subprocess
import time

import coverage.data
import pytest

@pytest.fixture(autouse=True)
def run_backend(cov):
    # 1. start the backend in coverage mode
    env = os.environ.copy()
    env['COVERAGE_FILE'] = '.coverage.backend'
    serverproc = subprocess.Popen(['coverage', 'run', 'backend.py'], env=env,
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE,
                                  preexec_fn=os.setsid)
    time.sleep(3)
    yield  # 2. run the test
    # 3. stop the backend so its coverage is written to file
    serverproc.send_signal(signal.SIGINT)
    time.sleep(1)
    # 4. read the backend coverage and append it to the test run coverage
    backendcov = coverage.data.CoverageData()
    with open('.coverage.backend') as fp:
        backendcov.read_fileobj(fp)
    cov.data.update(backendcov)
cov is the fixture provided by pytest-cov (docs).
Running the test adds the coverage of backend.py to the overall coverage, even though only the tests were selected for measurement (--cov=tests):
$ pytest --cov=tests --cov-report term -vs
=============================== test session starts ===============================
platform linux -- Python 3.6.5, pytest-3.4.1, py-1.5.3, pluggy-0.6.0 --
/data/gentoo64/usr/bin/python3.6
cachedir: .pytest_cache
rootdir: /data/gentoo64/home/u0_a82/projects/stackoverflow/so-50689940, inifile:
plugins: mock-1.6.3, cov-2.5.1
collected 1 item
tests/test_server.py::test_GET PASSED
----------- coverage: platform linux, python 3.6.5-final-0 -----------
Name                   Stmts   Miss  Cover
------------------------------------------
backend.py                12      0   100%
tests/conftest.py         18      0   100%
tests/test_server.py       5      0   100%
------------------------------------------
TOTAL                     35      0   100%
============================ 1 passed in 5.09 seconds =============================
With Coverage 5.1, based on the "Measuring sub-processes" section of the coverage.py docs, you can set the COVERAGE_PROCESS_START env-var, call coverage.process_startup() somewhere in your code, and set parallel=True in your .coveragerc.
Somewhere in your process, call this code:
import coverage
coverage.process_startup()
This can be done in sitecustomize.py globally, but in my case it was easy to add this to my application's __init__.py, where I added:
import os

if 'COVERAGE_PROCESS_START' in os.environ:
    import coverage
    coverage.process_startup()
Just to be safe, I added an additional check to this if statement (checking if MYAPP_COVERAGE_SUBPROCESS is also set)
In your test case, set COVERAGE_PROCESS_START to the path to your .coveragerc file (or an empty string if you don't need this config), for example:
import os
import subprocess
import sys

env = os.environ.copy()
env['COVERAGE_PROCESS_START'] = '.coveragerc'
cmd = [sys.executable, 'run_my_app.py']
p = subprocess.Popen(cmd, env=env)
p.communicate()
assert p.returncode == 0  # ..etc
Finally, you create .coveragerc containing:
[run]
parallel = True
source = myapp # Which module to collect coverage for
This ensures the .coverage files created by each process go to a unique file, which pytest-cov appears to merge automatically (or can be done manually with coverage combine). It also describes which modules to collect data for (the --cov=myapp arg doesn't get passed to child processes)
To run your tests, just invoke pytest --cov=

How to set dynamic default parameters for py.test?

I have a framework which works under py.test. py.test can generate pretty reports with the params --html and --junitxml. But clients using my framework don't always type these params on the command line when they run py.test. I want py.test to always generate reports when it is used with my framework, and I want to put these reports into the log folder. So I need to generate the path for the report at runtime. Can I do this with fixtures? Or maybe via the plugin API?
Putting this in conftest.py will suffice:
def pytest_configure(config):
    if config.option.xmlpath is None:
        config.option.xmlpath = get_custom_xml_path()  # implement this
The accepted answer is probably a bit more complicated than necessary for most people, for a few reasons:
The decorator doesn't help; it doesn't matter when this executes.
There is no need to make a custom LogXML, since you can just set the property here and it will be used.
slaveinput is specific to the pytest-xdist plugin. There is no need to check for it, especially if you don't use xdist.
First of all, if you want to implicitly add the command line args to pytest, you can use the pytest.ini placed in the tests root dir with the addopts config value:
[pytest]
addopts=--verbose --junit-xml=/tmp/myreport.xml # etc
Of course, if you want to dynamically calculate the directory to store the reports, then you can't put it in the config and will need to extend pytest. The best spot would be the pytest_configure hook. Example:
# conftest.py
import tempfile

import pytest
from _pytest.junitxml import LogXML

@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    if config.option.xmlpath:  # was passed via config or command line
        return  # let pytest handle it
    if not hasattr(config, 'slaveinput'):
        with tempfile.NamedTemporaryFile(suffix='.xml') as tmpfile:
            xmlpath = tmpfile.name
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
If you remove the first if block, then pytest will completely ignore --junit-xml arg passed via command line or in addopts value in config.
Example run:
$ pytest
=================================== test session starts ====================================
platform darwin -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
rootdir: /Users/hoefling/projects/private/stackoverflow/so-48320357, inifile:
plugins: forked-0.2, asyncio-0.8.0, xdist-1.22.0, mock-1.6.3, hypothesis-3.44.4
collected 1 item
test_spam.py . [100%]
--- generated xml file: /var/folders/_y/2qk6029j4c7bwv0ddk3p96r00000gn/T/tmp1tleknm3.xml ---
================================ 1 passed in 0.01 seconds ==================================
The xml report is now put in a tempfile.
Configure pytest.ini file with parameters:
# content of pytest.ini
[pytest]
addopts = --html=report.html --self-contained-html
;addopts = -vv -rw --html=./results/report.html --self-contained-html
@hoefling's answer worked perfectly for me in conftest.py; the code looks simpler there.
import time

from _pytest.junitxml import LogXML

def pytest_configure(config):
    if not config.option.xmlpath and not hasattr(config, 'slaveinput'):
        xmlpath = "test_report_" + str(int(time.time())) + ".xml"
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
Just to keep things clearer: pytest uses argparse, and request.config.option is an argparse.Namespace object. So if you would like to simulate a command-line option such as pytest ... --docker-compose-remove-volumes, you can directly set the attribute docker_compose_remove_volumes on request.config.option (because --docker-compose-remove-volumes is converted to docker_compose_remove_volumes by the argparse module).
This example inverts the default for --docker-compose-remove-volumes, which is false, but lets you re-enable the old behavior by passing the --keep-containers option to pytest.
import pytest

def pytest_addoption(parser):
    parser.addoption("--keep-containers", action="store_true", default=False,
                     help="Keeps docker-compose on failure.")

@pytest.fixture(scope='session', autouse=True)
def load_env(request):
    is_to_keep_container = request.config.getoption("--keep-containers")
    if not is_to_keep_container:
        request.config.option.docker_compose_remove_volumes = True
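The dash-to-underscore conversion this relies on can be seen in a tiny standalone argparse sketch (the flag name is just the one from the example above):

```python
import argparse

parser = argparse.ArgumentParser()
# argparse turns "--docker-compose-remove-volumes" into the Namespace
# attribute "docker_compose_remove_volumes"
parser.add_argument("--docker-compose-remove-volumes", action="store_true",
                    default=False)
ns = parser.parse_args([])
assert hasattr(ns, "docker_compose_remove_volumes")

# the same attribute can be set directly, as the fixture does
ns.docker_compose_remove_volumes = True
```

This is exactly why setting request.config.option.docker_compose_remove_volumes has the same effect as passing the flag on the command line.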
