How to prevent test execution if fixture setup fails? - python

I want to test a DB application with pytest, but I want to run my tests only if the initial connection setup in my fixture is successful. Otherwise, I simply want the test runner to pass successfully. I came up with the following code:
import logging
import MySQLdb
import pytest
@pytest.fixture(scope='module')
def setup_db():
    try:
        conn = MySQLdb.connect("127.0.0.1", 'testuser', 'testpassword', 'testdb')
        yield conn
        conn.close()
    except Exception as e:
        logging.exception("Failed to setup test database")
        yield

def test_db(setup_db):
    if not setup_db:
        assert True  # some dummy assert to mark test as True
    else:
        assert 4 == 1 + 3
As you can see, this is hacky and cumbersome, and it requires all of my tests to carry this boilerplate condition checking whether the setup_db fixture actually yielded something or not.
What I would ideally want is to return None or raise some exception that the pytest framework can catch and use to stop running the test suite. I tried returning from the fixture, but it didn't work.
Any ideas?

For cases where your tests and fixtures have external dependencies (e.g. a working database) and it's not really possible to run them successfully in a specific environment (e.g. on a cloud CI/CD service), it's better to simply not run those tests in the first place. Or, in reverse, only run the tests that are actually runnable in that environment.
This can be done using custom pytest markers.
As a sample setup, I have a tests folder with 2 sets of tests: one set that requires a working DB and a working setup_db fixture (test_db.py), and one set that has no external dependencies (test_utils.py).
$ tree tests
tests
├── pytest.ini
├── test_db.py
└── test_utils.py
The pytest.ini is where you register your markers:
[pytest]
markers =
    uses_db: Tests for DB-related functions (requires a working DB)
    utils: Tests for utility functions
    cicd: Tests to be included in a CI/CD pipeline
The markers can be set on function-level or on a class/module-level. For this case, it's better to group all DB-related tests into their own files and to mark the entire file:
test_db.py
import pytest
pytestmark = [pytest.mark.uses_db]
# Define all test functions here...
test_utils.py
import pytest
pytestmark = [pytest.mark.utils, pytest.mark.cicd]
# Define all test functions here...
Here each group of tests is marked, and the test_utils.py is additionally marked with cicd indicating it's compatible with running in a CI/CD pipeline (you can name your markers any way you like, this is just an example I personally use).
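If only a handful of tests need the DB, the same markers can also be applied per function rather than per module. A minimal sketch reusing the marker names above (the test names here are just placeholders):
import pytest

@pytest.mark.uses_db
def test_query_users():
    ...

@pytest.mark.utils
@pytest.mark.cicd
def test_format_name():
    ...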
Option 1
Tell pytest to run all tests except for tests marked with uses_db
$ pytest tests -v -m "not uses_db"
=========================== test session starts ===========================
...
collected 5 items / 3 deselected / 2 selected
tests/test_utils.py::test_utils_1 PASSED [ 50%]
tests/test_utils.py::test_utils_2 PASSED [100%]
Here pytest deselects all the tests marked with uses_db and runs only the rest. The deselected test code and any related fixture code are not executed, so you don't have to worry about catching and handling DB exceptions. None of that code runs in the first place.
Option 2
Tell pytest to run only tests marked with cicd
$ pytest tests -v -m "cicd"
=========================== test session starts ===========================
...
collected 5 items / 3 deselected / 2 selected
tests/test_utils.py::test_utils_1 PASSED [ 50%]
tests/test_utils.py::test_utils_2 PASSED [100%]
The result is the same in this case. Here, pytest finds all the tests marked with cicd and ignores all the others ("deselected"). As in Option 1, none of the deselected tests and fixtures are run, so again you don't have to worry about catching and handling DB exceptions.
So, for the case where:
The issue is when I push this code, the CI/CD pipeline runs the test suite, and if those machines do not have the testdb set up, the whole test suite will fail and the CI/CD pipeline will abort, which I don't want.
you can use either Option 1 or Option 2 to select which tests to run.
This is a bit different from using .skip() (as in this answer) or from ignoring fixture-related exceptions raised during test execution, because those solutions still evaluate and run the fixture and test code. Having to skip/ignore a test because of an Exception can be misleading. This answer avoids running them entirely.
It is also clearer because your run command explicitly states which sets of tests should or should not be run, rather than relying on your test code to handle possible exceptions.
In addition, using markers lets you run pytest --markers to get a list of which groups of tests can or cannot be run:
$ pytest tests --markers
@pytest.mark.uses_db: Tests for DB-related functions (requires a working DB)
@pytest.mark.utils: Tests for utility functions
@pytest.mark.cicd: Tests to be included in a CI/CD pipeline
...other built-in markers...
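As an aside, if you prefer not to keep a pytest.ini around, the same markers can also be registered programmatically; this sketch in conftest.py is equivalent to the ini entries above:
# conftest.py
def pytest_configure(config):
    # register the same markers as in the pytest.ini example
    config.addinivalue_line("markers", "uses_db: Tests for DB-related functions (requires a working DB)")
    config.addinivalue_line("markers", "utils: Tests for utility functions")
    config.addinivalue_line("markers", "cicd: Tests to be included in a CI/CD pipeline")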

Thanks to @MrBean Bremen's suggestion, I was able to skip tests from the fixture itself.
@pytest.fixture(scope='module')
def setup_db():
    try:
        conn = MySQLdb.connect("127.0.0.1", 'testuser', 'testpassword', 'testdb')
        yield conn
        conn.close()
    except Exception as e:
        logging.exception("Failed to setup test database")
        pytest.skip("****** DB Setup failed, skipping test suite *******", allow_module_level=True)

I recommend aborting or failing the entire test run if setting up the DB fixture fails, instead of just skipping the affected tests (as in this answer) or much worse, letting the tests pass successfully (as you mentioned at the start of your question: "I simply want the test runner to pass successfully.").
A failing DB fixture is a sign that something is wrong with your test environment, so there is no point in running any of the other tests until the problems with your environment are resolved. Even if the other tests pass, they can give you false positives or a false sense of confidence when something is wrong with your test setup. You'd want the run to catastrophically fail with a clear error that says "Hey, your test setup is broken".
There are 2 ways to abort the entire test run:
Call pytest.exit
@pytest.fixture(scope="module")
def setup_db():
    try:
        conn = DB.connect("127.0.0.1", "testuser", "testpassword", "testdb")
        yield conn
        conn.close()
    except Exception as e:
        pytest.exit(f"Failed to setup test database: {e}", returncode=2)
It prints out:
=========================== test session starts ============================
...
collected 3 items
test.py::test_db_1
========================== no tests ran in 0.30s ===========================
!!! _pytest.outcomes.Exit: Failed to setup test database: Timeout Error !!!!
As shown, there should be 3 tests, but execution stopped after the first one failed during fixture setup. One nice thing about .exit() is the returncode parameter, which you can set to some error code value (typically a non-zero integer). This works nicely with automated test runners, merge/pull request hooks, and CI/CD pipelines, because those typically check for non-zero exit codes.
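If you want to check that behavior without a full pipeline, pytest can also be invoked programmatically; pytest.main() returns the session's exit code, so a returncode passed to pytest.exit() should surface there. A small sketch, assuming a tests/ directory:
# run_tests.py - quick way to inspect the exit code locally
import pytest

# a pytest.exit(..., returncode=2) raised during fixture setup should show up here as 2
exit_code = pytest.main(["tests", "-v"])
print("pytest exited with", exit_code)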
Pass the -x or --exitfirst option
Code:
@pytest.fixture(scope="module")
def setup_db():
    # Don't catch the error
    conn = DB.connect("127.0.0.1", "testuser", "testpassword", "testdb")
    yield conn
    conn.close()
Run:
$ pytest test.py -v -x
=========================== test session starts ===========================
...
collected 3 items
test.py::test_db_1 ERROR [ 33%]
================================= ERRORS ==================================
_______________________ ERROR at setup of test_db_1 _______________________
@pytest.fixture(scope="module")
def setup_db():
> conn = DB.connect("127.0.0.1", "testuser", "testpassword", "testdb")
test.py:30:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ip = '127.0.0.1', username = 'testuser', password = 'testpassword',
db_name = 'testdb'
@staticmethod
def connect(ip: str, username: str, password: str, db_name: str):
...
> raise Exception("Cannot connect to DB: Timeout Error")
E Exception: Cannot connect to DB: Timeout Error
test.py:16: Exception
======================= short test summary info ========================
ERROR test.py::test_db_1 - Exception: Cannot connect to DB: Timeout Error
!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.05s ===========================
Here, you don't catch the error in the fixture. Just let the Exception be raised and go unhandled, so that pytest reports it as a failure (an error at setup), and then -x/--exitfirst aborts everything else after the first failed test.
Similar to pytest.exit(), the return/exit code will also be non-zero, so this works as well with automated test runners, merge/pull request hooks, and CI/CD pipelines.

Related

Algorithm for extracting first and last lines from sectionalized output file

I am trying to parse the FAILURES section from the terminal output of a Pytest session, line by line, identifying the testname and test filename for each test, which I then want to append together to form a "fully qualified test name" (FQTN), e.g. tests/test_1.py::test_3_fails. I also want to get and save the traceback info (which is what is between the testname and the test filename).
The parsing part is straightforward and I already have working regexes that match the test name and the test filename, and I can extract the traceback info based on that. My issue with the FQTNs is algorithmic - I can't seem to figure out the overall logic to identify a testname, then the test's filename, which occurs on a later line. I need to accommodate not only the tests in the middle of the FAILURES section, but also the first test and the last test of the section.
Here's an example. This is the output section for all failures during a test run, along with some of the terminal output that comes right before FAILURES, and right after.
.
.
.
============================================== ERRORS ===============================================
__________________________________ ERROR at setup of test_c_error ___________________________________
@pytest.fixture
def error_fixture():
> assert 0
E assert 0
tests/test_2.py:19: AssertionError
============================================= FAILURES ==============================================
___________________________________________ test_3_fails ____________________________________________
log_testname = None
def test_3_fails(log_testname):
> assert 0
E assert 0
tests/test_1.py:98: AssertionError
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
---------------------------------------- Captured log setup -----------------------------------------
INFO root:test_1.py:68 Running test tests.test_1...
INFO root:test_1.py:69 Setting test up...
INFO root:test_1.py:70 Executing test...
INFO root:test_1.py:72 Tearing test down...
______________________________________ test_8_causes_a_warning ______________________________________
log_testname = None
def test_8_causes_a_warning(log_testname):
> assert api_v1() == 1
E TypeError: api_v1() missing 1 required positional argument: 'log_testname'
tests/test_1.py:127: TypeError
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
---------------------------------------- Captured log setup -----------------------------------------
INFO root:test_1.py:68 Running test tests.test_1...
INFO root:test_1.py:69 Setting test up...
INFO root:test_1.py:70 Executing test...
INFO root:test_1.py:72 Tearing test down...
___________________________ test_16_fail_compare_dicts_for_pytest_icdiff ____________________________
def test_16_fail_compare_dicts_for_pytest_icdiff():
listofStrings = ["Hello", "hi", "there", "at", "this"]
listofInts = [7, 10, 45, 23, 77]
assert len(listofStrings) == len(listofInts)
> assert listofStrings == listofInts
E AssertionError: assert ['Hello', 'hi... 'at', 'this'] == [7, 10, 45, 23, 77]
E At index 0 diff: 'Hello' != 7
E Full diff:
E - [7, 10, 45, 23, 77]
E + ['Hello', 'hi', 'there', 'at', 'this']
tests/test_1.py:210: AssertionError
____________________________________________ test_b_fail ____________________________________________
def test_b_fail():
> assert 0
E assert 0
tests/test_2.py:27: AssertionError
============================================== PASSES ===============================================
___________________________________________ test_4_passes ___________________________________________
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
.
.
.
Is anyone here good with algorithms, maybe some pseudo code that shows an overall way of getting each testname and its associated test filename?
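One way to structure the line-by-line logic is a small state machine that remembers the current test name until the matching path:lineno line appears. This is only a rough sketch; the regexes here are illustrative stand-ins for the working ones mentioned in the question:
import re

TESTNAME_RE = re.compile(r"^_+ (\S+) _+$")        # e.g. "____ test_3_fails ____"
LOCATION_RE = re.compile(r"^(\S+\.py):\d+: \w+")  # e.g. "tests/test_1.py:98: AssertionError"
SECTION_RE = re.compile(r"^=+ (.+?) =+$")         # e.g. "==== FAILURES ===="

def parse_failures(lines):
    results = []          # list of (fqtn, traceback_lines)
    current_test = None
    traceback = []
    in_failures = False
    for line in lines:
        section = SECTION_RE.match(line)
        if section:
            # entering a new top-level section; only FAILURES is of interest
            in_failures = section.group(1) == "FAILURES"
            continue
        if not in_failures:
            continue
        banner = TESTNAME_RE.match(line)
        if banner:
            # start of a new failed test block
            current_test = banner.group(1)
            traceback = []
            continue
        location = LOCATION_RE.match(line)
        if location and current_test:
            # path + testname -> fully qualified test name
            results.append((f"{location.group(1)}::{current_test}", traceback[:]))
            current_test = None
            continue
        if current_test:
            traceback.append(line)
    return results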
Here is my proposal to get the rendered summary for a test case report. Use this stub as a rough idea - you might want to iterate through the reports and dump the rendered summaries first, then do the curses magic to display the collected data.
Some tests to play with:
import pytest

def test_1():
    assert False

def test_2():
    raise RuntimeError('call error')

@pytest.fixture
def f():
    raise RuntimeError('setup error')

def test_3(f):
    assert True

@pytest.fixture
def g():
    yield
    raise RuntimeError('teardown error')

def test_4(g):
    assert True
Dummy plugin example that renders the summary for test_3 case. Put the snippet in conftest.py:
def pytest_unconfigure(config):
    # example: get rendered output for test case `test_spam.py::test_3`
    # get the reporter
    reporter = config.pluginmanager.getplugin('terminalreporter')
    # create a buffer to dump reporter output to
    import io
    buf = io.StringIO()
    # fake tty or pytest will not colorize the output
    buf.isatty = lambda: True
    # replace writer in reporter to dump the output in buffer instead of stdout
    from _pytest.config import create_terminal_writer
    # I want to use the reporter again later to dump the rendered output,
    # so I store the original writer here (you probably don't need it)
    original_writer = reporter._tw
    writer = create_terminal_writer(config, file=buf)
    # replace the writer
    reporter._tw = writer
    # find the report for `test_spam.py::test_3` (we already know it will be an error report)
    errors = reporter.stats['error']
    test_3_report = next(
        report for report in errors if report.nodeid == 'test_spam.py::test_3'
    )
    # dump the summary along with the stack trace for the report of `test_spam.py::test_3`
    reporter._outrep_summary(test_3_report)
    # print dumped contents
    # you probably don't need this - this is just for demo purposes
    # restore the original writer to write to stdout again
    reporter._tw = original_writer
    reporter.section('My own section', sep='>')
    reporter.write(buf.getvalue())
    reporter.write_sep('<')
A pytest run now yields an additional section
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> My own section >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@pytest.fixture
def f():
> raise RuntimeError('setup error')
E RuntimeError: setup error
test_spam.py:14: RuntimeError
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
with the stack trace rendered the same way pytest renders it in the ERRORS summary section. You can play with outcomes for different test cases if you want - replace the reporter.stats section if necessary (errors or failed, or even passed - although the summary should be empty for passed tests) and amend the test case nodeid.
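Building on the note at the top of this answer, here is a sketch that iterates over all failed reports and collects each rendered summary keyed by node id; it reuses the same private _pytest helpers as above, so the usual caveats about internal APIs apply:
def collect_rendered_summaries(config):
    # returns {nodeid: rendered summary text} for all reports in the 'failed' bucket
    import io
    from _pytest.config import create_terminal_writer

    reporter = config.pluginmanager.getplugin('terminalreporter')
    original_writer = reporter._tw
    rendered = {}
    for report in reporter.stats.get('failed', []):
        buf = io.StringIO()
        buf.isatty = lambda: True  # keep colorized output, as in the snippet above
        reporter._tw = create_terminal_writer(config, file=buf)
        reporter._outrep_summary(report)
        rendered[report.nodeid] = buf.getvalue()
    # restore the original writer so the reporter writes to stdout again
    reporter._tw = original_writer
    return rendered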

pytest.skip within pytest_generate_tests skips all test functions in module instead of specific tests

I'm parameterizing pytest tests with variables defined in an external YAML file using the pytest_generate_tests hook. The name of the variable file is specified on the pytest command line (--params_file). Only some of the test functions within a module are parameterized and require the variables in this file. Thus, the command line option defining the variables is an optional argument. If the optional argument is omitted from the command line, then I want pytest to just "skip" those test functions which need the external parameterized variables and just run the "other" tests which are not parameterized. The problem is, if the command line option is omitted, pytest is skipping ALL of the test functions, not just the test functions that require the parameters.
Here is the test module file:
def test_network_validate_1(logger, device_connections):
    ### Test code omitted.....

def test_lsp_throttle_timers(params_file, logger, device_connections):
    ### Test code omitted.....

def test_network_validate_2(logger, device_connections):
    ### Test code omitted.....
pytest_generate_tests hook in conftest.py:
# Note, I tried scope at function level as well but that did not help
@pytest.fixture(scope='session')
def params_file(request):
    pass

def pytest_generate_tests(metafunc):
    ### Get Pytest rootdir
    rootdir = metafunc.config.rootdir
    print(f"*********** Test Function: {metafunc.function.__name__}")
    if "params_file" in metafunc.fixturenames:
        print("*********** Hello Silver ****************")
        if metafunc.config.getoption("--params_file"):
            #################################################################
            # Params file now located as a separate command line argument for
            # greater flexibility
            #################################################################
            params_file = metafunc.config.getoption("--params_file")
            params_doc = dnet_generic.open_yaml_file(Path(rootdir, params_file),
                                                     loader_type=yaml.Loader)
            test_module = metafunc.module.__name__
            test_function = metafunc.function.__name__
            names, values = dnet_generic.get_test_parameters(test_module,
                                                             test_function,
                                                             params_doc)
            metafunc.parametrize(names, values)
        else:
            pytest.skip("This test requires the params_file argument")
When the params_file option is present, everything works fine:
pytest isis/test_isis_lsp_throttle.py --testinfo topoA_r28.yml --ulog -s --params_file common/topoA_params.yml --collect-only
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.4, pytest-3.7.0, py-1.8.0, pluggy-0.13.0
rootdir: /home/as2863/pythonProjects/p1-automation, inifile: pytest.ini
plugins: csv-2.0.1, check-0.3.5, pylama-7.6.6, dependency-0.4.0, instafail-0.4.0, ordering-0.6, repeat-0.7.0, reportportal-5.0.3
collecting 0 items *********** Test Function: test_network_validate_1
*********** Test Function: test_lsp_throttle_timers
*********** Test Function: test_network_validate_2
collected 3 items
<Package '/home/as2863/pythonProjects/p1-automation/isis'>
<Module 'test_isis_lsp_throttle.py'>
<Function 'test_network_validate_1'>
<Function 'test_lsp_throttle_timers'>
<Function 'test_network_validate_2'>
================================================================================ no tests ran in 0.02 seconds =================================================================================
When the params_file option is omitted, you can see that no tests are run, and the print statements show it does not even try to run pytest_generate_tests on "test_network_validate_2":
pytest isis/test_isis_lsp_throttle.py --testinfo topoA_r28.yml --ulog -s --collect-only
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.4, pytest-3.7.0, py-1.8.0, pluggy-0.13.0
rootdir: /home/as2863/pythonProjects/p1-automation, inifile: pytest.ini
plugins: csv-2.0.1, check-0.3.5, pylama-7.6.6, dependency-0.4.0, instafail-0.4.0, ordering-0.6, repeat-0.7.0, reportportal-5.0.3
collecting 0 items
*********** Test Function: test_network_validate_1
*********** Test Function: test_lsp_throttle_timers
*********** Hello Silver ****************
collected 0 items / 1 skipped
================================================================================== 1 skipped in 0.11 seconds ==================================================================================
As has been found in the discussion in the comments, you cannot use pytest.skip in pytest_generate_tests, because there it acts at module scope and skips the whole module. To skip the concrete test, you can do something like this:
@pytest.fixture
def skip_test():
    pytest.skip('Some reason')

def pytest_generate_tests(metafunc):
    if "params_file" in metafunc.fixturenames:
        if metafunc.config.getoption("--params_file"):
            ...
            metafunc.parametrize(names, values)
        else:
            metafunc.fixturenames.insert(0, 'skip_test')
That is, you introduce a fixture that skips the concrete test and add this fixture to the test. Make sure to insert it as the first fixture, so no other fixtures are executed.
While MrBean Bremen's answer may work, according to the pytest authors dynamically altering the fixture list is not something they really want to support. This approach, however, is a bit more supported.
# This is "auto used", but doesn't always skip the test unless the test parameters require it
@pytest.fixture(autouse=True)
def skip_test(request):
    # For some reason this is only conditionally set if a param is passed
    # https://github.com/pytest-dev/pytest/blob/791b51d0faea365aa9474bb83f9cd964fe265c21/src/_pytest/fixtures.py#L762
    if not hasattr(request, 'param'):
        return
    pytest.skip(f"Test skipped: {request.param}")
And in your test module:
def _add_flag_parameter(metafunc: pytest.Metafunc, name: str):
    if name not in metafunc.fixturenames:
        return
    flag_value = metafunc.config.getoption(name)
    if flag_value:
        metafunc.parametrize(name, [flag_value])
    else:
        metafunc.parametrize("skip_test", [f"Missing flag '{name}'"], indirect=True)

def pytest_generate_tests(metafunc: pytest.Metafunc):
    _add_flag_parameter(metafunc, "params_file")

How to load variables from .env file for pytests

I am writing an API in Flask, and at some point I send an email to users who register. I store the variables for this email service in a .env file. Now I want to test a piece of code where I use these variables, but I have no idea how to load them from the .env file.
I tried basically all the answers here https://rb.gy/0nro1a, monkey patching setenv as shown here https://rb.gy/kd07wa, plus other tips here and there. Each failed at some point. I also tried using pytest-dotenv, pytest-env, pytest.ini, etc., but nothing really worked as expected, and it is all pretty confusing to me.
My pytest fixture looks like this:
@pytest.fixture(autouse=True)
def test_client_db():
    # set up
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///"
    app.config["JWT_SECRET_KEY"] = "testing"
    with app.app_context():
        db.init_app(app)
        db.create_all()
    testing_client = app.test_client()
    ctx = app.app_context()
    ctx.push()
    # do testing
    yield testing_client
    # tear down
    with app.app_context():
        db.session.remove()
        db.drop_all()
    ctx.pop()
I am wondering why I can't just simply load the .env file with a line like load_dotenv(path/to/.env) somewhere in the setup of the fixture and be done with it.
Can someone explain to me, as a newbie, how to read the .env variables in a simple, straightforward way that works with pytest?
The only way that actually works for me is to pass the environment variables on the command line as I run the tests.
FROM_EMAIL="some@email.com" MAILGUN_DOMAIN="sandbox6420919ab29b4228sdfda9d43ff37f7689072.mailgun.org" MAILGUN_API_KEY="245d6d0asldlasdkjfc380fba7fbskfsj1ad3125649esadbf2-7cd1ac2b-47fb3ac2" pytest tests
But this is a terrible way, and I don't want to write all these variables on the command line every time I run the tests.
I just want to write pytest tests; the .env file should be loaded automatically somewhere, I believe. But where and how?
Any help appreciated.
If you install python-dotenv, you can use that to load the variables from the .env file. Here is a minimal example:
.env
SQLALCHEMY_DATABASE_URI="sqlite:///"
JWT_SECRET_KEY="testing"
test.py
import os

import pytest
from dotenv import load_dotenv

@pytest.fixture(scope='session', autouse=True)
def load_env():
    load_dotenv()

@pytest.fixture(autouse=True)
def test_client_db():
    print(f"\nSQLALCHEMY_DATABASE_URI"
          f"={os.environ.get('SQLALCHEMY_DATABASE_URI')}")
    print(f"JWT_SECRET_KEY={os.environ.get('JWT_SECRET_KEY')}")

def test():
    pass
python -m pytest -s test.py gives:
============================================ test session starts ============================================
...
collected 1 item
test.py
SQLALCHEMY_DATABASE_URI=sqlite:///
JWT_SECRET_KEY=testing
.
============================================= 1 passed in 0.27s =============================================
That is, the environment variables are set throughout the test session and can be used to configure your app. Note that I didn't provide a path in load_dotenv(), because I put the .env file in the same directory as the test - in your code, you probably have to add the path (load_dotenv(dotenv_path=your_path)).
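If the .env file lives elsewhere (say, in the project root rather than next to the tests), the same session fixture works with an explicit path; the parents[1] level below is just an assumption about your layout:
from pathlib import Path

import pytest
from dotenv import load_dotenv

@pytest.fixture(scope='session', autouse=True)
def load_env():
    # assumes .env sits one directory above the tests folder - adjust to your layout
    env_path = Path(__file__).resolve().parents[1] / ".env"
    load_dotenv(dotenv_path=env_path)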
You can use the monkeypatch fixture provided by pytest to manipulate env variables:
@pytest.fixture(scope="function")
def configured_env(monkeypatch):
    monkeypatch.setenv("SQLALCHEMY_DATABASE_URI", "sqlite:///")
    monkeypatch.setenv("JWT_SECRET_KEY", "testing")

def test_client(configured_env):
    # env variables are set here
    ...

How to set dynamic default parameters for py.test?

I have a framework that works under py.test. py.test can generate nice reports with the --html and --junitxml options. But clients that use my framework don't always add these options to the command line when they run py.test. I want py.test to always generate reports when it is used with my framework, and I want to put those reports in the log folder. So I need to generate the report path at runtime. Can I do this with fixtures? Or maybe through the plugin API?
Putting this in conftest.py will suffice:
def pytest_configure(config):
    if config.option.xmlpath is None:
        config.option.xmlpath = get_custom_xml_path()  # implement this
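get_custom_xml_path() is left for you to implement; a minimal sketch that drops a timestamped report into a logs folder (the folder name is an assumption) could look like this:
import time
from pathlib import Path

def get_custom_xml_path():
    # put the junit report into a per-run file under the log folder
    log_dir = Path("logs")
    log_dir.mkdir(exist_ok=True)
    return str(log_dir / f"report_{int(time.time())}.xml")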
The accepted answer is probably a bit more complicated than necessary for most people for a few reasons:
The decorator doesn't help. It doesn't matter when this executes.
There is no need to make a custom LogXML since you can just set the property here and it will be used.
slaveinput is specific to the xdist plugin. I don't think there is any need to check for that, especially if you don't use xdist.
First of all, if you want to implicitly add the command line args to pytest, you can use the pytest.ini placed in the tests root dir with the addopts config value:
[pytest]
addopts=--verbose --junit-xml=/tmp/myreport.xml # etc
Of course, if you want to dynamically calculate the directory to store the reports, then you can't put it in the config and will need to extend pytest. The best spot would be the pytest_configure hook. Example:
# conftest.py
import tempfile

import pytest
from _pytest.junitxml import LogXML

@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    if config.option.xmlpath:  # was passed via config or command line
        return  # let pytest handle it
    if not hasattr(config, 'slaveinput'):
        with tempfile.NamedTemporaryFile(suffix='.xml') as tmpfile:
            xmlpath = tmpfile.name
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
If you remove the first if block, pytest will completely ignore the --junit-xml arg passed on the command line or via the addopts value in the config.
Example run:
$ pytest
=================================== test session starts ====================================
platform darwin -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
rootdir: /Users/hoefling/projects/private/stackoverflow/so-48320357, inifile:
plugins: forked-0.2, asyncio-0.8.0, xdist-1.22.0, mock-1.6.3, hypothesis-3.44.4
collected 1 item
test_spam.py . [100%]
--- generated xml file: /var/folders/_y/2qk6029j4c7bwv0ddk3p96r00000gn/T/tmp1tleknm3.xml ---
================================ 1 passed in 0.01 seconds ==================================
The xml report is now put in a tempfile.
Configure pytest.ini file with parameters:
# content of pytest.ini
[pytest]
addopts = --html=report.html --self-contained-html
;addopts = -vv -rw --html=./results/report.html --self-contained-html
@hoefling's answer worked perfectly for me in conftest.py. The code looks simpler there:
import time

from _pytest.junitxml import LogXML

def pytest_configure(config):
    if not config.option.xmlpath and not hasattr(config, 'slaveinput'):
        xmlpath = "test_report_" + str(int(time.time())) + ".xml"
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
Just to make things clearer: pytest uses argparse, and request.config.option is an argparse.Namespace object. So, if you would like to simulate a command line option such as pytest ... --docker-compose-remove-volumes, you can directly set the attribute docker_compose_remove_volumes on request.config.option (because --docker-compose-remove-volumes is converted to docker_compose_remove_volumes by the argparse module).
This example inverts the default for --docker-compose-remove-volumes, which is false, but lets you restore the original behavior by providing the --keep-containers option to pytest.
import pytest

def pytest_addoption(parser):
    parser.addoption("--keep-containers", action="store_true", default=False,
                     help="Keeps docker-compose on failure.")

@pytest.fixture(scope='session', autouse=True)
def load_env(request):
    is_to_keep_container = request.config.getoption("--keep-containers")
    if not is_to_keep_container:
        request.config.option.docker_compose_remove_volumes = True

pytest recording results in addition to the pass/fail

I've just started using pytest. Is there any way to record results in addition to the pass/fail status?
For example, suppose I have a test function like this:
@pytest.fixture(scope="session")
def server():
    # something goes here to setup the server
    ...

def test_foo(server):
    server.send_request()
    response = server.get_response()
    assert len(response) == 42
The test passes if the length of the response is 42. But I'd also like to record the response value as well ("...this call will be recorded for quality assurance purposes...."), even though I don't strictly require an exact value for the pass/fail criteria.
Print the result, then run py.test -s.
-s tells py.test not to capture stdout and stderr.
Adapting your example:
# test_service.py
# ---------------
def test_request():
    # response = server.get_response()
    response = "{'some':'json'}"
    assert len(response) == 15
    print response,  # comma prevents default newline
Running py.test -s produces
$ py.test -s test_service.py
=========================== test session starts ===========================
platform linux2 -- Python 2.7.6 -- py-1.4.26 -- pytest-2.6.4
collected 1 items
test_service.py {'some':'json'}.
======================== 1 passed in 0.04 seconds =========================
$
Or use python logging instead
# test_logging.py
# ---------------
import logging

logging.basicConfig(
    filename="logresults.txt",
    format="%(filename)s:%(lineno)d:%(funcName)s %(message)s")

def test_request():
    response = "{'some':'json'}"
    # print response,  # comma prevents default newline
    logging.warn("{'some':'json'}")  # sorry, newline unavoidable
    logging.warn("{'some':'other json'}")
Running py.test produces the machine readable file logresults.txt:
test_logging.py:11:test_request {'some':'json'}
test_logging.py:12:test_request {'some':'other json'}
Pro tip
Run vim logresults.txt +cbuffer to load the logresults.txt as your quickfix list.
See my example of passing test data to ELK:
http://fruch.github.io/blog/2014/10/30/ELK-is-fun/
Later I made it a bit like this:
def pytest_configure(config):
    # parameter to add analysis from tests teardowns, etc.
    config.analysis = []

def pytest_unconfigure(config):
    # send config.analysis to where you want, i.e. file / DB / ELK
    send_to_elk(config.analysis)

def test_example():
    pytest.config.analysis += ["My Data I want to keep"]
This is per run/session data, not per test (but I'm working on figuring out how to do it per test).
I'll try updating once I have a working example...
