I am writing integration tests for a project in which I am making HTTP calls and testing whether they were successful or not.
Since I am not importing any modules and not calling functions directly, the coverage.py report for this is 0%.
How can I generate a coverage report for such integration HTTP request tests?
The recipe is pretty much this:
1. Ensure the backend starts in code coverage mode.
2. Run the tests.
3. Ensure the backend coverage is written to file.
4. Read the coverage from file and append it to the test run coverage.
Example:
backend
Imagine you have a dummy backend server that responds with a "Hello World" page on GET requests:
# backend.py
from http.server import BaseHTTPRequestHandler, HTTPServer


class DummyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.end_headers()
        self.wfile.write('<html><body><h1>Hello World</h1></body></html>'.encode())


if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8000), DummyHandler).serve_forever()
test
A simple test that makes an HTTP request and verifies the response contains "Hello World":
# tests/test_server.py
import requests


def test_GET():
    resp = requests.get('http://127.0.0.1:8000')
    resp.raise_for_status()
    assert 'Hello World' in resp.text
Recipe
# tests/conftest.py
import os
import signal
import subprocess
import time

import coverage.data
import pytest


@pytest.fixture(autouse=True)
def run_backend(cov):
    # 1.
    env = os.environ.copy()
    env['COVERAGE_FILE'] = '.coverage.backend'
    serverproc = subprocess.Popen(['coverage', 'run', 'backend.py'], env=env,
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE,
                                  preexec_fn=os.setsid)
    time.sleep(3)
    yield  # 2.
    # 3.
    serverproc.send_signal(signal.SIGINT)
    time.sleep(1)
    # 4.
    backendcov = coverage.data.CoverageData()
    with open('.coverage.backend') as fp:
        backendcov.read_fileobj(fp)
    cov.data.update(backendcov)
cov is the fixture provided by pytest-cov (docs).
Running the test adds the coverage of backend.py to the overall coverage, even though only the tests directory was selected via --cov:
$ pytest --cov=tests --cov-report term -vs
=============================== test session starts ===============================
platform linux -- Python 3.6.5, pytest-3.4.1, py-1.5.3, pluggy-0.6.0 --
/data/gentoo64/usr/bin/python3.6
cachedir: .pytest_cache
rootdir: /data/gentoo64/home/u0_a82/projects/stackoverflow/so-50689940, inifile:
plugins: mock-1.6.3, cov-2.5.1
collected 1 item
tests/test_server.py::test_GET PASSED
----------- coverage: platform linux, python 3.6.5-final-0 -----------
Name                   Stmts   Miss  Cover
------------------------------------------
backend.py                12      0   100%
tests/conftest.py         18      0   100%
tests/test_server.py       5      0   100%
------------------------------------------
TOTAL                     35      0   100%
============================ 1 passed in 5.09 seconds =============================
With Coverage 5.1, based on the "Measuring sub-processes" section of the coverage.py docs, you can set the COVERAGE_PROCESS_START environment variable, call coverage.process_startup() somewhere in your code, and set parallel = True in your .coveragerc.
Somewhere in your process, call this code:
import coverage
coverage.process_startup()
This can be done in sitecustomize.py globally, but in my case it was easy to add this to my application's __init__.py, where I added:
import os

if 'COVERAGE_PROCESS_START' in os.environ:
    import coverage
    coverage.process_startup()
Just to be safe, I added an additional check to this if statement (checking that MYAPP_COVERAGE_SUBPROCESS is also set).
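That extra guard might look something like this (a sketch only; MYAPP_COVERAGE_SUBPROCESS is simply the extra variable mentioned above, which the test harness would set alongside COVERAGE_PROCESS_START):
# myapp/__init__.py -- sketch of the double check described above
import os

if 'COVERAGE_PROCESS_START' in os.environ and 'MYAPP_COVERAGE_SUBPROCESS' in os.environ:
    import coverage
    coverage.process_startup()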
In your test case, set COVERAGE_PROCESS_START to the path to your .coveragerc file (or an empty string if you don't need this config), for example:
import os
import subprocess
import sys

env = os.environ.copy()
env['COVERAGE_PROCESS_START'] = '.coveragerc'
cmd = [sys.executable, 'run_my_app.py']
p = subprocess.Popen(cmd, env=env)
p.communicate()
assert p.returncode == 0  # ..etc
Finally, you create .coveragerc containing:
[run]
parallel = True
source = myapp # Which module to collect coverage for
This ensures the .coverage files created by each process go to a unique file, which pytest-cov appears to merge automatically (or which can be merged manually with coverage combine). It also specifies which modules to collect data for (the --cov=myapp arg doesn't get passed to child processes).
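If you ever need to do the merge by hand, here is a minimal sketch using coverage's Python API (roughly equivalent to running coverage combine and then coverage report, assuming the .coverage.* files sit in the current directory):
# combine_coverage.py (sketch)
from coverage import Coverage

cov = Coverage()   # picks up the [run] settings from .coveragerc
cov.combine()      # merges the per-process .coverage.* data files
cov.save()         # writes the combined .coverage file
cov.report()       # prints a terminal report for the combined data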
To run your tests, just invoke pytest --cov=
Related
Is the following nested structure discoverable by unittest?
class HerclTests(unittest.TestCase):
    def testJobs(self):
        def testJobSubmit():
            jid = "foobar"
            assert jid, 'hercl job submit failed no job_id'
            return jid

        def testJobShow(jid):
            jid = "foobar"
            out, errout = bash(f"hercl job show --jid {jid} --form json")
            assert 'Job run has been accepted by airflow successfully' in out, 'hercl job show failed'
Here is the error when trying to run unittest:
============================= test session starts ==============================
platform darwin -- Python 3.6.7, pytest-5.4.3, py-1.10.0, pluggy-0.13.1 -- /Users/steve/git/hercl/.venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/steve/git/hercl/tests
collecting ... collected 0 items
ERROR: not found: /Users/steve/git/hercl/tests/hercl_flow_test.py::HerclTests::testJobs::testJobSubmit
(no name '/Users/steve/git/hercl/tests/hercl_flow_test.py::HerclTests::testJobs::testJobSubmit' in any of [<TestCaseFunction testJobs>])
============================ no tests ran in 0.01s =============================
Can this structure be tweaked to work with unittest or must each test method be elevated to the level of the HerclTests class?
This can't work - functions defined inside another function ("inner functions") only "exist" as variables inside the outer function's local scope. They're not accessible to any other code. The unittest discovery won't find them, and couldn't call them even if it knew about them.
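For example, each inner function can be promoted to its own test method on the class so that discovery picks it up (a rough sketch based on the snippet above; the bash helper is the questioner's own and is assumed to be importable):
import unittest


class HerclTests(unittest.TestCase):
    def testJobSubmit(self):
        jid = "foobar"
        self.assertTrue(jid, 'hercl job submit failed no job_id')

    def testJobShow(self):
        jid = "foobar"
        out, errout = bash(f"hercl job show --jid {jid} --form json")
        self.assertIn('Job run has been accepted by airflow successfully', out,
                      'hercl job show failed')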
I am using pytest with coverage, and the following manager script runs the test suite from the command line and generates the coverage reports for me:
"""Manager script to run the commands on the Flask API."""
import os
import click
from api import create_app
from briqy.logHelper import b as logger
import pytest
app = create_app()
#app.cli.command("tests")
#click.argument("option", required=False)
def run_test_with_option(option: str = None):
from subprocess import run
from shlex import split
if option is None:
raise SystemExit(
pytest.main(
[
"--disable-pytest-warnings",
"--cov=lib",
"--cov-config=.coveragerc",
"--cov-report=term",
"--cov-report=xml",
"--cov-report=html",
"--junitxml=./tests/coverage/junit.xml",
"--cov-append",
"--no-cov-on-fail",
]
)
)
elif option == "watch":
run(
split(
'ptw --runner "python3 -m pytest tests --durations=5 '
'--disable-pytest-warnings"'
)
)
elif option == "debug":
run(
split(
'ptw --pdb --runner "python3 -m pytest tests --durations=5 '
'--disable-pytest-warnings"'
)
)
if __name__ == "__main__":
HOST = os.getenv("FLASK_RUN_HOST", default="127.0.0.1")
PORT = os.getenv("FLASK_RUN_PORT", default=5000)
DEBUG = os.getenv("FLASK_DEBUG", default=False)
app.run(
host=HOST,
port=PORT,
debug=DEBUG,
)
if bool(DEBUG):
logger.info(f"Flask server is running in {os.getenv('ENV')}")
However, after running the tests, it shows the imports on top of the file being uncovered by tests. Does anyone know how to get rid of those uncovered lines?
You haven't shown the whole program, but I will guess that you are importing your product code at the top of this file. That means all of the top-level statements in the product code (import, class, def, etc) will have already run by the time the pytest-cov plugin starts the coverage measurement.
There are a few things you can do:
Run pytest from the command line instead of in-process in your product code.
Change this code to start pytest in a subprocess so that everything will be re-imported by pytest.
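For the second option, a minimal sketch of what that could look like (reusing the same pytest options as the in-process call shown in the question; this replaces pytest.main() and is not a drop-in version of the original script):
# sketch: run pytest in a child process so the product code is imported
# (and measured) by the pytest-cov plugin instead of by this script
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pytest",
     "--disable-pytest-warnings",
     "--cov=lib",
     "--cov-config=.coveragerc",
     "--cov-report=term",
     "--cov-report=xml",
     "--cov-report=html",
     "--junitxml=./tests/coverage/junit.xml",
     "--no-cov-on-fail"]
)
raise SystemExit(result.returncode)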
What you should not do: exclude import lines from coverage measurement. The .coveragerc you showed will ignore any line with the string "from" or "import" in it, which isn't what you want ("import" is in "do_important_thing()" for example!)
We're using pytest and Python's standard logging, and have some tests in doctests. We'd like to enable log_cli to make debugging tests in an IDE easier (it lets stderr flow to the "live" console so one can see log statements as they are output when stepping through). The problem is that there appears to be a bug/interaction between the use of logging (e.g. the presence of a call to logger.info("...")) and log_cli=true.
I don't see any other flags or mention of this in the docs, so it appears to be a bug, but was hoping there is a workaround.
This test module passes:
# bugjar.py
"""
>>> dummy()
'retval'
"""
import logging

logger = logging.getLogger(__name__)


def dummy():
    # logger.info("AnInfoLog")  ## un-comment to break test
    return "retval"
but un-commenting the logger.info() call (no other changes) causes a failure, unless I remove log_cli from pytest.ini:
002 >>> dummy()
Expected:
'retval'
Got nothing
Here is my command line & relevant version output(s):
(venv) $ ./venv/bin/pytest -c ./pytest.ini ./bugjar.py
======================================================================= test session starts ========================================================================
platform linux -- Python 3.8.5, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: -omitted-/test-unit, configfile: ./pytest.ini
plugins: forked-1.3.0, xdist-2.2.1, profiling-1.7.0
collected 1 item
bugjar.py::bugjar
bugjar.py::bugjar
-------------------------------------------------------------------------- live log call ---------------------------------------------------------------------------
INFO bugjar:bugjar.py:8 AnInfoLog
FAILED [100%]
============================================================================= FAILURES =============================================================================
_________________________________________________________________________ [doctest] bugjar _________________________________________________________________________
001
002 >>> dummy()
Expected:
'retval'
Got nothing
and my pytest.ini (note: the comments are correct; pass/fail is not affected by use of log_file or other addopts):
[pytest]
addopts = --doctest-modules # --profile --profile-svg
norecursedirs = logs bin tmp* scratch venv* data
# log_file = pytest.log
log_cli = true
log_cli_level=debug
Removing log_cli* from pytest.ini makes the issue go away.
This seems clearly related to what log_cli manipulates when capturing output for use in the doctest itself, but it is also not the expected behavior.
I am hoping I've made a mistake, or that there is a workaround to get live log output in bash or in an IDE shell window / debugger.
I have a framework which works under py.test. py.test can generate beautiful reports with the --html and --junitxml params. But clients using my framework don't always type these params on the command line when they run py.test. I want py.test to always generate reports when it is used with my framework, and I want to put these reports into the log folder. So I need to generate the report path at runtime. Can I do this with fixtures? Or maybe via the plugin API?
Putting this in conftest.py will suffice:
def pytest_configure(config):
    if config.option.xmlpath is None:
        config.option.xmlpath = get_custom_xml_path()  # implement this
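For instance, get_custom_xml_path() could be a small helper along these lines (purely illustrative; the logs directory and the timestamped filename are assumptions based on the question, not part of the original answer):
import os
import time


def get_custom_xml_path():
    # put the JUnit report into the log folder with a run-specific name
    os.makedirs("logs", exist_ok=True)
    return os.path.join("logs", "report-{}.xml".format(int(time.time())))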
The accepted answer is probably a bit more complicated than necessary for most people, for a few reasons:
The decorator doesn't help; it doesn't matter when this executes.
There is no need to make a custom LogXML, since you can just set the property here and it will be used.
slaveinput is specific to a pytest plugin, xdist. I don't think there is any need to check for that, especially if you don't use xdist.
First of all, if you want to implicitly add the command line args to pytest, you can use the pytest.ini placed in the tests root dir with the addopts config value:
[pytest]
addopts=--verbose --junit-xml=/tmp/myreport.xml # etc
Of course, if you want to dynamically calculate the directory to store the reports, then you can't put it in the config and will need to extend pytest. The best spot would be the pytest_configure hook. Example:
# conftest.py
import tempfile

import pytest
from _pytest.junitxml import LogXML


@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    if config.option.xmlpath:  # was passed via config or command line
        return  # let pytest handle it
    if not hasattr(config, 'slaveinput'):
        with tempfile.NamedTemporaryFile(suffix='.xml') as tmpfile:
            xmlpath = tmpfile.name
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
If you remove the first if block, then pytest will completely ignore the --junit-xml arg passed via the command line or via the addopts value in the config.
Example run:
$ pytest
=================================== test session starts ====================================
platform darwin -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
rootdir: /Users/hoefling/projects/private/stackoverflow/so-48320357, inifile:
plugins: forked-0.2, asyncio-0.8.0, xdist-1.22.0, mock-1.6.3, hypothesis-3.44.4
collected 1 item
test_spam.py . [100%]
--- generated xml file: /var/folders/_y/2qk6029j4c7bwv0ddk3p96r00000gn/T/tmp1tleknm3.xml ---
================================ 1 passed in 0.01 seconds ==================================
The xml report is now put in a tempfile.
Configure the pytest.ini file with these parameters:
# content of pytest.ini
[pytest]
addopts = --html=report.html --self-contained-html
;addopts = -vv -rw --html=./results/report.html --self-contained-html
@hoefling's answer worked perfectly for me in conftest.py. The code looks simpler there.
import time
from _pytest.junitxml import LogXML


def pytest_configure(config):
    if not config.option.xmlpath and not hasattr(config, 'slaveinput'):
        xmlpath = "test_report_" + str(int(time.time())) + ".xml"
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
Just to keep things clear: pytest uses argparse, and request.config.option is an argparse.Namespace object. So, if you would like to simulate a command line option such as pytest ... --docker-compose-remove-volumes, you can directly set the attribute docker_compose_remove_volumes on request.config.option (because --docker-compose-remove-volumes is converted to docker_compose_remove_volumes by the argparse module).
This example inverts the default for --docker-compose-remove-volumes, which is false, but allows you to switch it back by providing the --keep-containers option to pytest.
import pytest


def pytest_addoption(parser):
    parser.addoption("--keep-containers", action="store_true", default=False,
                     help="Keeps docker-compose on failure.")


@pytest.fixture(scope='session', autouse=True)
def load_env(request):
    is_to_keep_container = request.config.getoption("--keep-containers")
    if not is_to_keep_container:
        request.config.option.docker_compose_remove_volumes = True
I've just started using pytest. Is there any way to record results in addition to the pass/fail status?
For example, suppose I have a test function like this:
@pytest.fixture(scope="session")
def server():
    ...  # something goes here to set up the server


def test_foo(server):
    server.send_request()
    response = server.get_response()
    assert len(response) == 42
The test passes if the length of the response is 42. But I'd also like to record the response value as well ("...this call will be recorded for quality assurance purposes...."), even though I don't strictly require an exact value for the pass/fail criteria.
Print the result, then run py.test -s.
-s tells py.test not to capture stdout and stderr.
Adapting your example:
# test_service.py
# ---------------
def test_request():
    # response = server.get_response()
    response = "{'some':'json'}"
    assert len(response) == 15
    print response,  # comma prevents default newline
Running py.test -s produces
$ py.test -s test_service.py
=========================== test session starts ===========================
platform linux2 -- Python 2.7.6 -- py-1.4.26 -- pytest-2.6.4
collected 1 items
test_service.py {'some':'json'}.
======================== 1 passed in 0.04 seconds =========================
$
Or use python logging instead
# test_logging.py
# ---------------
import logging

logging.basicConfig(
    filename="logresults.txt",
    format="%(filename)s:%(lineno)d:%(funcName)s %(message)s")


def test_request():
    response = "{'some':'json'}"
    # print response,  # comma prevents default newline
    logging.warn("{'some':'json'}")  # sorry, newline unavoidable
    logging.warn("{'some':'other json'}")
Running py.test produces the machine readable file logresults.txt:
test_logging.py:11:test_request {'some':'json'}
test_logging.py:12:test_request {'some':'other json'}
Pro tip
Run vim logresults.txt +cbuffer to load the logresults.txt as your quickfix list.
See my example of passing test data to ELK:
http://fruch.github.io/blog/2014/10/30/ELK-is-fun/
Later, I made it a bit like this:
import pytest


def pytest_configure(config):
    # parameter to add analysis from test teardowns, etc.
    config.analysis = []


def pytest_unconfigure(config):
    # send config.analysis to where you want, i.e. file / DB / ELK
    send_to_elk(config.analysis)


def test_example():
    pytest.config.analysis += ["My Data I want to keep"]
This is per-run/session data, not per-test data (but I'm working on figuring out how to do it per test).
I'll try updating once I have a working example...
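In the meantime, one possible per-test variation of the same idea (a sketch, not the author's finished solution) is to tag each entry with the current test's node id via a small fixture:
# conftest.py (sketch)
import pytest


def pytest_configure(config):
    config.analysis = []


@pytest.fixture
def analysis(request):
    def record(data):
        # store the data together with the id of the test that produced it
        request.config.analysis.append({"test": request.node.nodeid, "data": data})
    return record

# test_example.py (sketch)
def test_example(analysis):
    analysis("My Data I want to keep")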