I've just started using pytest. Is there any way to record results in addition to the pass/fail status?
For example, suppose I have a test function like this:
@pytest.fixture(scope="session")
def server():
    # something goes here to set up the server
    ...

def test_foo(server):
    server.send_request()
    response = server.get_response()
    assert len(response) == 42
The test passes if the length of the response is 42. But I'd also like to record the response value as well ("...this call will be recorded for quality assurance purposes...."), even though I don't strictly require an exact value for the pass/fail criteria.
Print the result, then run py.test -s.
-s tells py.test not to capture stdout and stderr.
Adapting your example:
# test_service.py
# ---------------
def test_request():
    # response = server.get_response()
    response = "{'some':'json'}"
    assert len(response) == 15
    print response,  # comma prevents default newline
Running py.test -s produces
$ py.test -s test_service.py
=========================== test session starts ===========================
platform linux2 -- Python 2.7.6 -- py-1.4.26 -- pytest-2.6.4
collected 1 items
test_service.py {'some':'json'}.
======================== 1 passed in 0.04 seconds =========================
$
Or use Python logging instead
# test_logging.py
# ---------------
import logging

logging.basicConfig(
    filename="logresults.txt",
    format="%(filename)s:%(lineno)d:%(funcName)s %(message)s")

def test_request():
    response = "{'some':'json'}"
    # print response,  # comma prevents default newline
    logging.warn("{'some':'json'}")  # sorry, newline unavoidable
    logging.warn("{'some':'other json'}")
Running py.test produces the machine-readable file logresults.txt:
test_logging.py:11:test_request {'some':'json'}
test_logging.py:12:test_request {'some':'other json'}
Pro tip
Run vim logresults.txt +cbuffer to load logresults.txt into your quickfix list.
See my example of passing test data to ELK:
http://fruch.github.io/blog/2014/10/30/ELK-is-fun/
Later I made it a bit more like this:
def pytest_configure(config):
    # place to collect analysis from tests, teardowns, etc.
    config.analysis = []

def pytest_unconfigure(config):
    # send config.analysis wherever you want, e.g. file / DB / ELK
    send_to_elk(config.analysis)

def test_example():
    pytest.config.analysis += ["My Data I want to keep"]
This is per-run/session data, not per-test (I'm still working on figuring out how to do it per test).
I'll try updating once I have a working example...
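In the meantime, one per-test alternative worth considering is pytest's built-in record_property fixture, which attaches key/value pairs to an individual test's report and writes them into the JUnit XML file when you run with --junitxml. A minimal sketch (the filename and values are just placeholders):
# test_record.py -- sketch using pytest's built-in record_property fixture
def test_response_length(record_property):
    response = "{'some':'json'}"            # stand-in for server.get_response()
    record_property("response", response)   # recorded for this test only
    assert len(response) == 15
Running pytest --junitxml=results.xml then includes the recorded value as a <property> entry under that specific test case.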
Related
I am using Invoke and have two tasks: one cleans a raw data file to produce a clean data file, and the other produces several plots from the clean data file:
RAW_FILE = "raw.csv"
CLEAN_FILE = "clean.csv"
PLOT_FILE = "plot.svg"
#task(optional=["logging"])
def clean_data(c, logging=None):
"""Produce cleaned-up dataset."""
print("CLEAN", logging)
_configure_logging(logging)
df = clean_raw_data(RAW_FILE)
df.to_csv(CLEAN_FILE, index=False)
#task(pre=[clean_data], optional=["logging"])
def plot_data(c, logging=None):
"""Create plots of data."""
print("PLOT", logging)
_configure_logging(logging)
make_plot(CLEAN_FILE, PLOT_FILE)
def _configure_logging(log_level):
"""Initialize logging."""
if log_level is not None:
print("SETTING LOGGING TO", log_level)
CONFIG["LOGGING_LEVEL"] = log_level.upper()
If I run:
$ invoke clean-data --logging info
then logging is set to INFO and I get a message from inside clean_raw_data. However, if I run:
$ invoke plot-data --logging info
then:
clean_data is invoked with logging=None, so no log message appears.
plot_data is then invoked with logging="info", so its log message appears.
My expectation was that command-line flags would be passed down to dependent tasks. I tried doing this manually:
@task(pre=[call(clean_data, logging=logging)], optional=["logging"])
def plot_data(c, logging=None):
    ...as before...
but this produces an error message because logging isn't defined at the point the @task decorator is invoked.
Is there a way to chain optional arguments in the desired fashion?
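Not a full answer to the pre=[...] chaining question, but one workaround sketch, relying on the fact that @task-decorated objects are still callable as plain functions: drop the pre-task and forward the flag explicitly from the body (clean_data, _configure_logging, make_plot and the file constants are the ones defined above):
@task(optional=["logging"])
def plot_data(c, logging=None):
    """Create plots of data, forwarding --logging to the cleaning step."""
    clean_data(c, logging=logging)   # explicit call, so the flag propagates
    _configure_logging(logging)
    make_plot(CLEAN_FILE, PLOT_FILE)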
I am trying to parse the FAILURES section from the terminal output of a Pytest session, line by line, identifying the testname and test filename for each test, which I then want to append together to form a "fully qualified test name" (FQTN), e.g. tests/test_1.py::test_3_fails. I also want to get and save the traceback info (which is what is between the testname and the test filename).
The parsing part is straightforward and I already have working regexes that match the test name and the test filename, and I can extract the traceback info based on that. My issue with the FQTNs is algorithmic - I can't seem to figure out the overall logic to identify a testname and then the test's filename, which occurs on a later line. I need to accommodate not only the tests in the middle of the FAILURES section, but also the first test and the last test of the section.
Here's an example. This is the output section for all failures during a test run, along with some of the terminal output that comes right before FAILURES, and right after.
.
.
.
============================================== ERRORS ===============================================
__________________________________ ERROR at setup of test_c_error ___________________________________
@pytest.fixture
def error_fixture():
> assert 0
E assert 0
tests/test_2.py:19: AssertionError
============================================= FAILURES ==============================================
___________________________________________ test_3_fails ____________________________________________
log_testname = None
def test_3_fails(log_testname):
> assert 0
E assert 0
tests/test_1.py:98: AssertionError
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
---------------------------------------- Captured log setup -----------------------------------------
INFO root:test_1.py:68 Running test tests.test_1...
INFO root:test_1.py:69 Setting test up...
INFO root:test_1.py:70 Executing test...
INFO root:test_1.py:72 Tearing test down...
______________________________________ test_8_causes_a_warning ______________________________________
log_testname = None
def test_8_causes_a_warning(log_testname):
> assert api_v1() == 1
E TypeError: api_v1() missing 1 required positional argument: 'log_testname'
tests/test_1.py:127: TypeError
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
---------------------------------------- Captured log setup -----------------------------------------
INFO root:test_1.py:68 Running test tests.test_1...
INFO root:test_1.py:69 Setting test up...
INFO root:test_1.py:70 Executing test...
INFO root:test_1.py:72 Tearing test down...
___________________________ test_16_fail_compare_dicts_for_pytest_icdiff ____________________________
def test_16_fail_compare_dicts_for_pytest_icdiff():
listofStrings = ["Hello", "hi", "there", "at", "this"]
listofInts = [7, 10, 45, 23, 77]
assert len(listofStrings) == len(listofInts)
> assert listofStrings == listofInts
E AssertionError: assert ['Hello', 'hi... 'at', 'this'] == [7, 10, 45, 23, 77]
E At index 0 diff: 'Hello' != 7
E Full diff:
E - [7, 10, 45, 23, 77]
E + ['Hello', 'hi', 'there', 'at', 'this']
tests/test_1.py:210: AssertionError
____________________________________________ test_b_fail ____________________________________________
def test_b_fail():
> assert 0
E assert 0
tests/test_2.py:27: AssertionError
============================================== PASSES ===============================================
___________________________________________ test_4_passes ___________________________________________
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
.
.
.
Is anyone here good with algorithms - maybe some pseudocode that shows an overall way of getting each testname and its associated test filename?
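Not a polished answer, but here is a rough sketch of the state-machine logic I would try, based only on the two patterns visible in the output above: a failed test starts at an underscore-padded header line containing the test name, and its traceback ends at a path:lineno: SomeError line. The regexes are illustrative guesses, not tested against every pytest output format:
import re

# illustrative patterns based on the FAILURES output shown above
TESTNAME_RE = re.compile(r"^_+ (\S+) _+$")        # "______ test_3_fails ______"
LOCATION_RE = re.compile(r"^(\S+\.py):\d+: \S+")  # "tests/test_1.py:98: AssertionError"

def parse_failures(lines):
    """Return a list of (fqtn, traceback_lines) tuples; expects only the FAILURES section."""
    results = []
    current_test = None
    traceback = []
    for line in lines:
        header = TESTNAME_RE.match(line)
        if header:
            current_test = header.group(1)
            traceback = []
            continue
        if current_test is None:
            continue                        # captured stdout/log lines after a test is closed
        location = LOCATION_RE.match(line)
        if location:
            fqtn = "{}::{}".format(location.group(1), current_test)
            results.append((fqtn, traceback[:]))
            current_test = None             # close this test; wait for the next header
        else:
            traceback.append(line)
    return results
Because each test is closed by its own path:lineno: line rather than by the next test's header, the first and last failures in the section need no special-casing.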
Here is my proposal to get the rendered summary for a test case report. Use this stub as a rough idea - you might want to iterate through the reports and dump the rendered summaries first, then do the curses magic to display the collected data.
Some tests to play with:
import pytest

def test_1():
    assert False

def test_2():
    raise RuntimeError('call error')

@pytest.fixture
def f():
    raise RuntimeError('setup error')

def test_3(f):
    assert True

@pytest.fixture
def g():
    yield
    raise RuntimeError('teardown error')

def test_4(g):
    assert True
Dummy plugin example that renders the summary for test_3 case. Put the snippet in conftest.py:
def pytest_unconfigure(config):
    # example: get rendered output for test case `test_spam.py::test_3`
    # get the reporter
    reporter = config.pluginmanager.getplugin('terminalreporter')
    # create a buffer to dump reporter output to
    import io
    buf = io.StringIO()
    # fake tty or pytest will not colorize the output
    buf.isatty = lambda: True
    # replace writer in reporter to dump the output in buffer instead of stdout
    from _pytest.config import create_terminal_writer
    # I want to use the reporter again later to dump the rendered output,
    # so I store the original writer here (you probably don't need it)
    original_writer = reporter._tw
    writer = create_terminal_writer(config, file=buf)
    # replace the writer
    reporter._tw = writer
    # find the report for `test_spam.py::test_3` (we already know it will be an error report)
    errors = reporter.stats['error']
    test_3_report = next(
        report for report in errors if report.nodeid == 'test_spam.py::test_3'
    )
    # dump the summary along with the stack trace for the report of `test_spam.py::test_3`
    reporter._outrep_summary(test_3_report)
    # print dumped contents
    # you probably don't need this - this is just for demo purposes
    # restore the original writer to write to stdout again
    reporter._tw = original_writer
    reporter.section('My own section', sep='>')
    reporter.write(buf.getvalue())
    reporter.write_sep('<')
A pytest run now yields an additional section
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> My own section >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@pytest.fixture
def f():
> raise RuntimeError('setup error')
E RuntimeError: setup error
test_spam.py:14: RuntimeError
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
with the stack trace rendered the same way pytest renders it in the ERRORS summary section. You can play with outcomes for different test cases if you want - replace the reporter.stats key if necessary (error or failed, or even passed - although the summary should be empty for passed tests) and amend the test case nodeid.
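If you want this for every failed or errored test instead of a single hard-coded nodeid, the same trick can be looped over the reporter stats, as hinted at the start of this answer. A rough sketch with the same caveats (it pokes at the private _tw and _outrep_summary internals, so it may break between pytest versions):
# conftest.py -- sketch: render the summary for every errored/failed report
import io
from _pytest.config import create_terminal_writer

def pytest_unconfigure(config):
    reporter = config.pluginmanager.getplugin('terminalreporter')
    original_writer = reporter._tw
    rendered = {}
    for outcome in ('error', 'failed'):
        for report in reporter.stats.get(outcome, []):
            buf = io.StringIO()
            buf.isatty = lambda: True                       # keep colorized output
            reporter._tw = create_terminal_writer(config, file=buf)
            reporter._outrep_summary(report)                # summary + traceback into buf
            rendered[report.nodeid] = buf.getvalue()
    reporter._tw = original_writer                          # restore the stdout writer
    # `rendered` now maps nodeid -> rendered text; feed it to your curses UI, a file, etc.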
I want to test a DB application with pytest, but I want to run my tests only if the initial connection setup in my fixture is successful. Otherwise, I simply want the test runner to pass successfully. I came up with the following code:
import logging
import MySQLdb
import pytest

@pytest.fixture(scope='module')
def setup_db():
    try:
        conn = MySQLdb.connect("127.0.0.1", 'testuser', 'testpassword', 'testdb')
        yield conn
        conn.close()
    except Exception as e:
        logging.exception("Failed to setup test database")
        yield

def test_db(setup_db):
    if not setup_db:
        assert True  # some dummy assert to mark test as True
    else:
        assert 4 == 1 + 3
As you can see, this is hacky and cumbersome, and it requires all of my tests to carry this boilerplate condition checking whether the setup_db fixture actually yielded something or not.
What I would ideally want is to maybe return None or raise some exception that the pytest framework can catch and stop running the test suite. I tried returning from the fixture but it didn't work.
Any ideas?
For cases where your tests and fixtures have external dependencies (e.g. a working database) and it's not really possible to run them successfully in a specific environment (e.g. on a cloud CI/CD service), it's better to simply not run those tests in the first place. Or, in reverse, only run the tests that are actually runnable in that environment.
This can be done using custom pytest markers.
As a sample setup, I have a tests folder with 2 sets of tests: one that requires a working DB and a working setup_db fixture (test_db.py), and one with no external dependencies (test_utils.py).
$ tree tests
tests
├── pytest.ini
├── test_db.py
└── test_utils.py
The pytest.ini is where you register your markers:
[pytest]
markers =
    uses_db: Tests for DB-related functions (requires a working DB)
    utils: Tests for utility functions
    cicd: Tests to be included in a CI/CD pipeline
The markers can be set on function-level or on a class/module-level. For this case, it's better to group all DB-related tests into their own files and to mark the entire file:
test_db.py
import pytest
pytestmark = [pytest.mark.uses_db]
# Define all test functions here...
test_utils.py
import pytest
pytestmark = [pytest.mark.utils, pytest.mark.cicd]
# Define all test functions here...
Here each group of tests is marked, and the test_utils.py is additionally marked with cicd indicating it's compatible with running in a CI/CD pipeline (you can name your markers any way you like, this is just an example I personally use).
Option 1
Tell pytest to run all tests except for tests marked with uses_db
$ pytest tests -v -m "not uses_db"
=========================== test session starts ===========================
...
collected 5 items / 3 deselected / 2 selected
tests/test_utils.py::test_utils_1 PASSED [ 50%]
tests/test_utils.py::test_utils_2 PASSED [100%]
Here pytest deselects all the tests marked with uses_db and runs only the rest. The test code and any related fixture code are not executed, so you don't have to worry about catching and handling DB exceptions - none of that code runs in the first place.
Option 2
Tell pytest to run only tests marked with cicd
$ pytest tests -v -m "cicd"
=========================== test session starts ===========================
...
collected 5 items / 3 deselected / 2 selected
tests/test_utils.py::test_utils_1 PASSED [ 50%]
tests/test_utils.py::test_utils_2 PASSED [100%]
The result is the same in this case. Here, pytest finds all the tests marked with cicd and ignores all the others ("deselected"). As in Option 1, none of the deselected tests and fixtures are run, so again you don't have to worry about catching and handling DB exceptions.
So, for the case where:
The issue is when I push this code, the CI/CD pipeline runs the test suite, and if those machines do not have the test DB set up, the whole test suite will fail and the CI/CD pipeline will abort, which I don't want.
You can use either Option 1 or Option 2 to select which tests to run.
This is a bit different from using .skip() (as in this answer) or ignoring fixture-related exceptions raised during test execution, because those solutions still evaluate and run the fixture and test code. Having to skip/ignore a test because of an exception can be misleading. This answer avoids running them entirely.
It is also clearer because your run command explicitly states which sets of tests should or should not be run, rather than relying on your test code to handle possible exceptions.
In addition, using markers lets pytest --markers give you a list of the registered markers, documenting which groups of tests can or cannot be run:
$ pytest tests --markers
@pytest.mark.uses_db: Tests for DB-related functions (requires a working DB)
@pytest.mark.utils: Tests for utility functions
@pytest.mark.cicd: Tests to be included in a CI/CD pipeline
...other built-in markers...
Thanks to @MrBean Bremen's suggestion, I was able to skip tests from the fixture itself.
@pytest.fixture(scope='module')
def setup_db():
    try:
        conn = MySQLdb.connect("127.0.0.1", 'testuser', 'testpassword', 'testdb')
        yield conn
        conn.close()
    except Exception as e:
        logging.exception("Failed to setup test database")
        pytest.skip("****** DB Setup failed, skipping test suite *******", allow_module_level=True)
I recommend aborting or failing the entire test run if setting up the DB fixture fails, instead of just skipping the affected tests (as in this answer) or, much worse, letting the tests pass successfully (as you mentioned at the start of your question: "I simply want the test runner to pass successfully.").
A failing DB fixture is a sign that something is wrong with your test environment, so there is no point in running any of the other tests until the problems with your environment are resolved. Even if the other tests pass, they can give you false positives or a false sense of confidence when something is wrong with your test setup. You'd want the run to fail catastrophically with a clear error state that says "Hey, your test setup is broken".
There are 2 ways to abort the entire test run:
Call pytest.exit
@pytest.fixture(scope="module")
def setup_db():
    try:
        conn = DB.connect("127.0.0.1", "testuser", "testpassword", "testdb")
        yield conn
        conn.close()
    except Exception as e:
        pytest.exit(f"Failed to setup test database: {e}", returncode=2)
It prints out:
=========================== test session starts ============================
...
collected 3 items
test.py::test_db_1
========================== no tests ran in 0.30s ===========================
!!! _pytest.outcomes.Exit: Failed to setup test database: Timeout Error !!!!
As shown, there should be 3 tests but it stopped after the 1st one failed to load the fixture. One nice thing about .exit() is the returncode parameter, which you can set to some error code value (typically some non-zero integer value). This works nicely with automated test runners, merge/pull request hooks, and CI/CD pipelines, because those typically check for non-zero exit codes.
Pass the -x or --exitfirst option
Code:
@pytest.fixture(scope="module")
def setup_db():
    # Don't catch the error
    conn = DB.connect("127.0.0.1", "testuser", "testpassword", "testdb")
    yield conn
    conn.close()
Run:
$ pytest test.py -v -x
=========================== test session starts ===========================
...
collected 3 items
test.py::test_db_1 ERROR [ 33%]
================================= ERRORS ==================================
_______________________ ERROR at setup of test_db_1 _______________________
@pytest.fixture(scope="module")
def setup_db():
> conn = DB.connect("127.0.0.1", "testuser", "testpassword", "testdb")
test.py:30:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ip = '127.0.0.1', username = 'testuser', password = 'testpassword',
db_name = 'testdb'
@staticmethod
def connect(ip: str, username: str, password: str, db_name: str):
...
> raise Exception("Cannot connect to DB: Timeout Error")
E Exception: Cannot connect to DB: Timeout Error
test.py:16: Exception
======================= short test summary info ========================
ERROR test.py::test_db_1 - Exception: Cannot connect to DB: Timeout Error
!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.05s ===========================
Here, you don't catch the error in the fixture. Just let the exception be raised and go unhandled, so that pytest sees it as a failed test, and -x/--exitfirst then aborts everything else after the first failure.
Similar to pytest.exit(), the return/exit code will be non-zero, so this also works with automated test runners, merge/pull request hooks, and CI/CD pipelines.
I am writing integration tests for a project in which I am making HTTP calls and testing whether they were successful or not.
Since I am not importing any modules or calling functions directly, the coverage.py report for this is 0%.
How can I generate a coverage report for such integration HTTP request tests?
The recipe is pretty much this:
Ensure the backend starts in code coverage mode
Run the tests
Ensure the backend coverage is written to file
Read the coverage from file and append it to test run coverage
Example:
backend
Imagine you have a dummy backend server that responds with a "Hello World" page on GET requests:
# backend.py
from http.server import BaseHTTPRequestHandler, HTTPServer

class DummyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.end_headers()
        self.wfile.write('<html><body><h1>Hello World</h1></body></html>'.encode())

if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8000), DummyHandler).serve_forever()
test
A simple test that makes an HTTP request and verifies the response contains "Hello World":
# tests/test_server.py
import requests

def test_GET():
    resp = requests.get('http://127.0.0.1:8000')
    resp.raise_for_status()
    assert 'Hello World' in resp.text
Recipe
# tests/conftest.py
import os
import signal
import subprocess
import time

import coverage.data
import pytest

@pytest.fixture(autouse=True)
def run_backend(cov):
    # 1.
    env = os.environ.copy()
    env['COVERAGE_FILE'] = '.coverage.backend'
    serverproc = subprocess.Popen(['coverage', 'run', 'backend.py'], env=env,
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE,
                                  preexec_fn=os.setsid)
    time.sleep(3)
    yield  # 2.
    # 3.
    serverproc.send_signal(signal.SIGINT)
    time.sleep(1)
    # 4.
    backendcov = coverage.data.CoverageData()
    with open('.coverage.backend') as fp:
        backendcov.read_fileobj(fp)
    cov.data.update(backendcov)
cov is the fixture provided by pytest-cov (docs).
Running the test adds the coverage of backend.py to the overall coverage, even though only the tests package was selected for coverage (--cov=tests):
$ pytest --cov=tests --cov-report term -vs
=============================== test session starts ===============================
platform linux -- Python 3.6.5, pytest-3.4.1, py-1.5.3, pluggy-0.6.0 --
/data/gentoo64/usr/bin/python3.6
cachedir: .pytest_cache
rootdir: /data/gentoo64/home/u0_a82/projects/stackoverflow/so-50689940, inifile:
plugins: mock-1.6.3, cov-2.5.1
collected 1 item
tests/test_server.py::test_GET PASSED
----------- coverage: platform linux, python 3.6.5-final-0 -----------
Name Stmts Miss Cover
------------------------------------------
backend.py 12 0 100%
tests/conftest.py 18 0 100%
tests/test_server.py 5 0 100%
------------------------------------------
TOTAL 35 0 100%
============================ 1 passed in 5.09 seconds =============================
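One caveat: the read_fileobj() call in step 4 above belongs to the pre-5.0 coverage API. With coverage 5.x the data file is SQLite-based, so step 4 would look more like this (an untested sketch using the documented CoverageData/get_data APIs, with the same .coverage.backend filename as above):
# step 4, variant for coverage >= 5.0
from coverage.data import CoverageData

def merge_backend_coverage(cov):
    """Append the backend's coverage data to the data collected by pytest-cov."""
    backendcov = CoverageData(basename='.coverage.backend')
    backendcov.read()                  # coverage 5.x reads its SQLite data file from disk
    cov.get_data().update(backendcov)  # merge into the test-run coverage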
With Coverage 5.1, based on the "Measuring sub-processes" section of the coverage.py docs, you can set the COVERAGE_PROCESS_START env var and call coverage.process_startup() somewhere in your code, provided you set parallel = True in your .coveragerc.
Somewhere in your process, call this code:
import coverage
coverage.process_startup()
This can be done in sitecustomize.py globally, but in my case it was easy to add this to my application's __init__.py, where I added:
import os

if 'COVERAGE_PROCESS_START' in os.environ:
    import coverage
    coverage.process_startup()
Just to be safe, I added an additional check to this if statement (checking whether MYAPP_COVERAGE_SUBPROCESS is also set).
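A sketch of what that extra guard might look like (MYAPP_COVERAGE_SUBPROCESS is just the author's own opt-in variable, not something coverage.py defines):
# myapp/__init__.py -- only start subprocess coverage when explicitly opted in
import os

if 'COVERAGE_PROCESS_START' in os.environ and 'MYAPP_COVERAGE_SUBPROCESS' in os.environ:
    import coverage
    coverage.process_startup()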
In your test case, set COVERAGE_PROCESS_START to the path to your .coveragerc file (or an empty string if you don't need this config), for example:
import os
import subprocess
import sys

env = os.environ.copy()
env['COVERAGE_PROCESS_START'] = '.coveragerc'
cmd = [sys.executable, 'run_my_app.py']
p = subprocess.Popen(cmd, env=env)
p.communicate()
assert p.returncode == 0  # ..etc
Finally, you create .coveragerc containing:
[run]
parallel = True
source = myapp # Which module to collect coverage for
This ensures the coverage data created by each process goes to a unique .coverage file, which pytest-cov appears to merge automatically (or this can be done manually with coverage combine). The source setting also describes which modules to collect data for (the --cov=myapp argument doesn't get passed to child processes).
To run your tests, just invoke pytest --cov=
I have created a sample project (PyCharm + Mac) to integrate SonarQube into a Python project using nosetests and coverage:
src/Sample.py
import sys

def fact(n):
    """
    Factorial function
    :arg n: Number
    :returns: factorial of n
    """
    if n == 0:
        return 1
    return n * fact(n - 1)

def main(n):
    res = fact(n)
    print(res)

if __name__ == '__main__' and len(sys.argv) > 1:
    main(int(sys.argv[1]))
test/SampleTest.py
import unittest

from src.Sample import fact

class TestFactorial(unittest.TestCase):
    """
    Our basic test class
    """

    def test_fact1(self):
        """
        The actual test.
        Any method which starts with ``test_`` will be considered a test case.
        """
        res = fact(0)
        self.assertEqual(res, 1)

    def test_fac2(self):
        """
        The actual test.
        Any method which starts with ``test_`` will be considered a test case.
        """
        res = fact(5)
        self.assertEqual(res, 120)

if __name__ == '__main__':
    unittest.main()
sonar-project.properties
sonar.projectKey=SonarQubeSample
sonar.projectName=Sonar Qube Sample
sonar.projectVersion=1.0
sonar.sources=src
sonar.tests=test
sonar.language=py
sonar.sourceEncoding=UTF-8
sonar.python.xunit.reportPath=nosetests.xml
sonar.python.coverage.reportPath=coverage.xml
sonar.python.coveragePlugin=cobertura
The command below creates the nosetests.xml file successfully:
nosetests --with-xunit ./test/SampleTest.py
When I run the command below:
nosetests --with-coverage --cover-package=src --cover-inclusive --cover-xml
it gives the following result:
Name              Stmts   Miss  Cover
-------------------------------------
src/Sample.py        10      6    40%
src/__init__.py       0      0   100%
-------------------------------------
TOTAL                10      6    40%
----------------------------------------------------------------------
Ran 0 tests in 0.011s
OK
Why is the fact function's code not shown as covered in my SonarQube project after running the sonar-scanner command?
You should always try to make one test fail to be sure that your command tests something. The following command does not execute any tests:
nosetests --with-coverage --cover-package=src --cover-inclusive --cover-xml
One solution is to add test/*Test.py at the end.
To generate nosetests.xml and coverage.xml with only one command, you can execute:
nosetests --with-xunit --with-coverage --cover-package=src --cover-inclusive --cover-xml test/*Test.py
Note: You need to create a test/__init__.py file (even an empty one), so that the file paths in nosetests.xml can be resolved.
Note: You need at least SonarPython version 1.9 to parse coverage.xml.